Here’s what my daily reading habit looks like: I scroll through my Facebook and Twitter feeds, various Flipboard sections, and the first page of Hacker News, skim the headlines/snippets, and save to Pocket anything I might want to read in full. And I always try to get through all the saved links before the end of the day.
This excludes books & magazines (the New Yorker, etc.), as they are less intentional.
So on top of a layer of curation (Flipboard) and social selection (Twitter/Facebook/HN), there is a final layer of curation: me.
What’s missing here?
- Serendipity. Strictly speaking, the Flipboard/Twitter/etc. filter sits on top of a self-curation layer where I choose which people and which topics to follow. Anything outside those bounds is excluded, which hurts serendipitous discovery.
- Weak Signals of what my network is reading. What they share is a strong signal of what they really like, but they don’t necessarily share everything they enjoyed reading. Enough occurrences of the same weak signal could add up to a strong one.
- Conversations. The discussions that happen in comment sections and tweet replies carry rich, varied opinions and information, but parsing that sheer volume of content seems intractable, so some aggregation or summarization would be great.
- Contextual Relevance. 4-5 years ago a number of startups were working on various contextual algorithms to find you content you’d be interested in based on your online activity. None of them seems to have survived, though. I’d love to discover articles based on what I’ve been searching on Google, places where I’ve been checking in, links I save to Pocket, etc.
The last one would also help remove the temporal concentration of the content: feeds surface mostly what was published today, whereas contextual discovery could resurface older articles that are still relevant to me.
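The weak-signals idea above could be sketched very roughly: count how many distinct people in my network have weakly signaled (liked, bookmarked) the same link, and promote it to a recommendation once the count crosses some threshold. The data, names, and threshold here are all made up for illustration; a real version would pull signals from actual APIs.

```python
from collections import Counter

# Hypothetical weak signals: (person, url) pairs from likes/bookmarks,
# as opposed to explicit shares. Purely illustrative data.
weak_signals = [
    ("alice", "example.com/post-a"),
    ("bob", "example.com/post-a"),
    ("carol", "example.com/post-a"),
    ("bob", "example.com/post-b"),
    ("bob", "example.com/post-b"),  # duplicate signal from the same person
]

def strong_links(signals, threshold=3):
    """Promote a link once enough *distinct* people have weakly
    signaled it. Deduplicate per (person, url) so one person's
    repeated likes only count once."""
    counts = Counter(url for _, url in set(signals))
    return [url for url, n in counts.items() if n >= threshold]

print(strong_links(weak_signals))  # only post-a crosses the threshold
```

The interesting design question is the threshold: too low and weak signals stay noise, too high and you never surface anything beyond what people already share explicitly.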