How I Work (Productivity)

This is the second in a series of posts about how I do my day job. You can find the first post here: How I Work (Tools)

At this point, I feel like I’ve tried every available productivity tool and method. I still experiment when I see something new, but I’ve finally refined my process for getting stuff done on a day-to-day basis. There are several pieces, but each is generally simple on its own. Actually, the whole process is simple. Otherwise I wouldn’t follow it.

Project-Based Planning

Today, my go-to for planning projects is the iOS/macOS Reminders app. It doesn’t have a lot of features, but it syncs across my devices and prompts me with annoying notifications when I’m behind on deadlines. I’ve tried things like Todoist and spent weeks trying to get OmniFocus integrated into my workflow, but I didn’t have the patience to either adjust how I work to meet the limitations of the software or keep customizing it. Ad hoc projects also land on my plate on a regular basis, so I needed something easy and fluid enough to adapt. Lastly, I’m not going to pay for complexity when simplicity is free.

In Reminders, each project I’m working on gets its own list of deliverables, and each deliverable has a priority and due date. If it’s a publishing or presentation project, I also create a notebook in Evernote to store web clippings, notes, PDFs, etc. When a project is completed, the Reminders list is deleted and the Evernote notebook goes into an archived notebook stack. Why don’t I use Evernote’s reminders instead? Because they’re impossible to find across devices. (For such a critical component of the way I work, Evernote is a disappointing piece of software.)
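
For illustration, the structure amounts to something like this: a minimal Python sketch of the data model only (the project, titles and dates are made up, and none of this is the Reminders API):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Deliverable:
    title: str
    priority: int  # 1 = high; drives how I triage within a list
    due: date      # the date that triggers the nagging notifications

@dataclass
class Project:
    name: str
    deliverables: list[Deliverable] = field(default_factory=list)

# Hypothetical example: one Reminders list per project, one entry per deliverable.
webinar = Project("Data lake webinar", [
    Deliverable("Draft polling questions", priority=1, due=date(2017, 6, 1)),
    Deliverable("Finish slide deck", priority=2, due=date(2017, 6, 8)),
])
```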

The Reminders app is really a staging area for everything that I have to get done, but it can be overwhelming to see everything at once. That’s when I use a simplified bullet journal.

Bullet Journal for Daily Processing

Each morning follows roughly the same pattern. I look through the list of projects to see what’s languishing, then add each project’s next deliverable to a notebook – with actual paper and pen. I might add 3-4 work-related things and 1-2 things around the house I need to get done (clean the litter boxes? yay!). I don’t add more because 1) I know I likely won’t get that far and 2) something else is always waiting in my inbox.

While there are certainly examples of elaborate bullet journals, mine is a simple list of the day’s tasks with boxes to the left. Completed tasks get an ‘x.’ Tasks I didn’t complete get an arrow indicating a carry-over to the next day. Sometimes things don’t go my way and I end up carrying a task over for days at a time.
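
The notation is simple enough to capture in a few lines. A minimal sketch of the end-of-day carry-over (plain Python; the task names are hypothetical):

```python
# Each day's page: (task, completed) pairs. Completed tasks get an 'x';
# unfinished ones get an arrow and move to the next day's page.
today = [
    ("Draft webinar abstract", True),
    ("Clean the litter boxes", True),
    ("Review maturity model outline", False),
]

carried_over = [(task, False) for task, done in today if not done]
print(carried_over)  # [('Review maturity model outline', False)]
```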

Aggressive Time-Boxing for Individual Tasks

This last part is the most recent addition to my productivity process. I received an Esington pomodoro timer as a gift, which forced me to learn about the Pomodoro Technique. Pomodoro is a simplified time management method in which you work for 25 minutes at a time, then take a short break. That’s it. With the 25-minute timer in front of me, it’s easier to avoid distractions and focus on the task at hand. Add some noise-canceling headphones, and I’m set.
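
If you’d rather script it than buy a timer, the whole technique fits in a few lines. A minimal sketch (plain Python; the 5-minute break is the conventional choice, not something from this post):

```python
import time

WORK_MINUTES = 25
BREAK_MINUTES = 5  # conventional short break

def pomodoro(cycles=4):
    for i in range(1, cycles + 1):
        print(f"Pomodoro {i}: work for {WORK_MINUTES} minutes.")
        time.sleep(WORK_MINUTES * 60)  # heads-down: no email, no notifications
        print("Take a short break.")
        time.sleep(BREAK_MINUTES * 60)

if __name__ == "__main__":
    pomodoro()
```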

Why This Works for Me

With hundreds of productivity methods and best practices out there, I find this simple method works for me because:

It’s not overly digital. Notifications flashing on my phone and other screens don’t create a sense of urgency for me. The digital parts are just there to store tasks until I add them to the treeware notebook. Writing things down and crossing them off gives a sense of satisfaction that checking off a digital box doesn’t. And the physical act of flipping over a 25-minute timer helps me focus in a way that a timer on my phone doesn’t.

It’s simple. Many productivity methods (GTD, in my opinion) focus on the method instead of the result. Often, they’re so intricate and rigid that they fail to reflect the messy reality of most people’s work lives. My cobbled-together method may not look pretty or win any awards, but it doesn’t have to. It only has to help me get stuff done.

Does this sound like your productivity method? Did you get OmniFocus to work for you? (If you did, I’d like to know how.) Let me know in the comments.

Data Lake Webinar Recap

Last Thursday I presented the webinar “From Pointless to Profitable: Using Data Lakes for Sustainable Analytics Innovation” to about 300 attendees. While we don’t consider webinar polling results valid data for research publication (too many concerns about survey sampling), webinar polls can offer some interesting directional insight.

I asked the audience two questions. First, I asked what the data lake concept meant to them. There were some surprises:
[Poll 1 results: What does the data lake concept mean to you?]

The audience’s biggest expectation for a data lake is as a platform to support self-service BI and analytics (36%), followed by a staging area for downstream analytics platforms (25%). It’s not unreasonable to combine these two – the functionality of a data lake is largely the same in both cases. The users and tools differ between the two use cases, but it’s still the same data lake. A realistic approach is to think of these two use cases as a continuum: self-service users first identify new or existing data sources that support a new result; then those data sources are processed, staged and moved to an optimized analytics platform.
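
In practice, that continuum might look something like this: a hypothetical Python sketch in which data graduates from a raw zone to a staged zone (the zone layout, field names and file format are mine, not a reference implementation):

```python
import json
from pathlib import Path

RAW_ZONE = Path("lake/raw")        # where self-service users explore
STAGED_ZONE = Path("lake/staged")  # feeds the optimized analytics platform

def stage(source: Path) -> Path:
    """Promote a raw source that self-service users found valuable:
    parse it, keep only the fields the downstream platform needs,
    and land it in the staged zone for loading."""
    records = [json.loads(line) for line in source.read_text().splitlines()]
    kept = [{"id": r["id"], "amount": float(r["amount"])} for r in records]
    STAGED_ZONE.mkdir(parents=True, exist_ok=True)
    target = STAGED_ZONE / source.name
    target.write_text("\n".join(json.dumps(r) for r in kept))
    return target
```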

It was reassuring to see smaller groups of respondents considering a data lake as a data warehouse replacement (9%) or as a single source for all operational and analytical workloads (15%). I expected these numbers to be higher based on overall market hype.

The second polling question asked what type of data lake audience members had implemented. Before I get into the results, I have to set some context. My colleague Svetlana Sicular identified three data lake architecture styles (see “Three Architecture Styles for a Useful Data Lake”):

  1. Inflow lake: accommodates a collection of data ingested from many different sources that are disconnected outside the lake but can be used together by being colocated within a single place.
  2. Outflow lake: a landing area for freshly arrived data available for immediate access or via streaming. It employs schema-on-read for the downstream data interpretation and refinement (see the sketch after this list). The outflow data lake is usually not the final destination for the data, but it may keep raw data long term to preserve the context for downstream data stores and applications.
  3. Data science lab: most suitable for data discovery and for developing new advanced analytics models — to increase the organization’s competitive advantage through new insights or innovation.
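
To make the schema-on-read idea in the outflow style concrete: the lake keeps records exactly as they arrived, and each downstream consumer applies its own schema at read time. A minimal sketch (plain Python; the events and fields are hypothetical):

```python
import json

# Raw events land in the lake as they arrived -- no upfront schema.
raw_events = [
    '{"ts": "2017-06-01T10:00:00", "user": "a12", "amount": "19.99"}',
    '{"ts": "2017-06-01T10:00:05", "user": "b34", "amount": "5.00"}',
]

def read_with_schema(raw, schema):
    """Interpret raw records at read time: each consumer picks the
    fields it cares about and coerces types as it reads."""
    for line in raw:
        record = json.loads(line)
        yield {name: cast(record[name]) for name, cast in schema.items()}

# One downstream consumer's view of the same raw data.
sales_schema = {"user": str, "amount": float}
print(list(read_with_schema(raw_events, sales_schema)))
```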

With that context in place, I asked the audience about their implementation:
[Poll 2 results: What type of data lake have you implemented?]

63% of respondents have yet to implement a data lake. That’s understandable – after all, they’re listening to a foundational webinar about the concept. The outflow lake was the most common architecture style (15%), and it’s also the type clients ask about most frequently. The inflow and data science lab styles tied at 11%.

The audience also asked some excellent questions. Many asked about securing and governing data lakes, a topic I’m hoping to address soon with Andrew White and Merv Adrian.

Five Levels of Streaming Analytics Maturity

Data and analytics leaders are increasingly targeting stream processing and streaming analytics to get faster time to insight on new or existing data sources. Year to date, streaming analytics inquiries from end users have increased 35% over 2016. I expect that trend to continue.
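
For a sense of what the simplest case looks like in code, here is a minimal sketch of a tumbling-window aggregation over an event stream (plain Python; the event shape and 60-second window are hypothetical):

```python
WINDOW_SECONDS = 60  # one-minute tumbling windows

def windowed_counts(events):
    """Count events per window, emitting each window's total as soon
    as the stream moves past it, rather than waiting for a batch run."""
    current, count = None, 0
    for ts, _payload in events:       # (timestamp_seconds, event) pairs
        window = ts // WINDOW_SECONDS
        if current is None:
            current = window
        if window != current:         # window closed: emit its result
            yield current, count
            current, count = window, 0
        count += 1
    if current is not None:           # flush the final open window
        yield current, count

stream = [(3, "login"), (42, "click"), (61, "click"), (119, "purchase")]
print(list(windowed_counts(stream)))  # [(0, 2), (1, 2)]
```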

In getting to real time, these leaders are presented with a range of proprietary commercial products, open source projects and open core products that wrap an existing open source framework. However, in many cases, streaming analytics capabilities are little more than commercially supported open source bundled with some other product. Creating a streaming analytics application is left as an exercise for the buyer.

The challenge is that getting real value from streams of data requires more than a point solution. Stream analytics is a cross-functional discipline integrating technology, business processes, information governance and business alignment. It’s the difficulty of integrating these areas that keeps many organizations from realizing the value of their data in real time. I’ve been working with my colleague Roy Schulte on a streaming analytics maturity model to help organizations understand what’s required at each maturity level.

In “The Five Levels of Stream Analytics — How Mature Are You?”, we present structured maturity levels that data and analytics leaders can use to evaluate the current state of their stream analytics capabilities and advance their organization’s maturity toward becoming a smarter, event-driven enterprise. The report is focused on the use of event streams for analytics purposes, with the goal of improving decision making. Gartner clients can download it here.