How I Work (Tools)

I’ve always been a fan of the “How I work” posts on curated sites like Lifehacker. And since Lifehacker isn’t likely to come knocking on my inbox anytime soon, I figured I’d roll my own. Reading about how people in various professions structure their days and design for productivity or creativity has helped me construct my own strategy. My intent is to keep the conversation going on new tools or methods I might try, and to see if my processes might work for you. This will be a short series, starting with tools.

Hardware

Most of my research and writing happens in my home office, on a 13″ MacBook Pro (late 2015) and 27″ Apple Cinema Display. The display has been showing its age lately, with USB and audio problems, although I suspect the audio problems are mostly due to some awful Plantronics software. A good chunk of my day is spent on the phone, which is where the Plantronics Savi 700 comes in. My desktop is rounded out with a Logitech Performance MX mouse and an Apple keyboard.

Even though the MBP is on the lighter side, I still need the power adapter and mouse when traveling. I’m actively looking to reduce the amount of stuff I travel with. To that end, I recently got the new 10.5″ iPad Pro with Smart Keyboard and Pencil. The iPad Pro with iOS 10 is already excellent, and iOS 11 should greatly improve productivity. After a few months with the new iPad, I can report that the battery life is superb and that I’m much happier with the Smart Keyboard than I thought I’d be. The Pencil has been basically useless for the kinds of tasks I do, though I haven’t fully integrated it into my processes yet.

I’m still using an iPhone 6s with no plans to upgrade unless something happens to it. I also wear a Series 1 Apple Watch, which is mostly just a fitness tracker and a timer for whatever’s cooking.

Software

My software toolchain is a bit of a mixed bag. Evernote is an essential component: I’m always clipping web pages or saving PDFs into it. But Evernote’s PDF annotation capabilities are abysmal (and frequently broken), so I supplement it with PDF Expert.

I rely on the Microsoft Office suite for content creation. I’ve tried G Suite and found it lacking when it comes to niche Office features I’ve come to count on.

Of course, I also use WordPress.

For todos and reminders, I use the iCloud Reminders app. (Hey, I don’t judge you.) I’ve run the gauntlet of OmniFocus, Todoist and a dozen others, but Reminders gives me just enough detail without becoming a distraction. It also syncs across all of my devices – and it’s free.

The Rest

Admittedly, digital reminders don’t motivate me to do things. For that, I go analog: a simple notebook and pen, used as a bare-bones bullet journal, helps me get things done.

What am I missing? How does your tooling differ? Let me know in the comments.

Google/Walmart Tie-Up Leaves Data Use and Ownership Unanswered

Google and Walmart have announced a partnership in which Google Home users can purchase Walmart’s products using voice ordering. As Recode points out, the intent of the partnership is to blunt Amazon’s early lead in voice-based ordering. Coming at this from the data and analytics perspective, my first question is: what happens to the customer data from potentially millions of orders?

Google’s position in the partnership is clearly more advantageous than Walmart’s. For Google, the data from voice-based ordering is likely to be combined with the customer profile it already has and will feed its advertising efforts. Obviously Walmart also gets the order data, but who else? Can Google resell that data to other parties? These details weren’t included in the partnership announcement, but Google’s terms and conditions make it clear that it can use the data however it sees fit.

As partnerships between consumer-centric companies proliferate, questions about who owns customer data and how it is used must become prominent, both for the companies involved and for the affected consumers. After all, consumers provide the data that drives revenue for companies like Google.

Data Lake Webinar Recap

Last Thursday I presented the webinar “From Pointless to Profitable: Using Data Lakes for Sustainable Analytics Innovation” to about 300 attendees. While we don’t consider webinar polling results valid data for research publication (too many concerns about survey sampling), webinar polls can offer some interesting directional insight.

I asked the audience two questions. First, I asked what the data lake concept meant to them. There were some surprises:
[Poll results: What does the data lake concept mean to you?]

The audience’s primary expectation for a data lake is as a platform to support self-service BI and analytics (36%), but also as a staging area for downstream analytics platforms (25%). It’s not unreasonable to combine the two – the functionality of a data lake is largely the same in both cases. The users and tools for each use case differ, but it’s still the same data lake. A realistic approach is to think of these two use cases as a continuum: self-service users first identify new or existing data sources that support a new result; then those data sources are processed, staged and moved to an optimized analytics platform.
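
To make that continuum concrete, here’s a minimal sketch in Python (all names, paths and fields are hypothetical, not from the webinar): the same lake serves self-service exploration via schema-on-read, while a small staging job curates the same sources for a downstream, optimized analytics platform.

```python
import csv
import json
from pathlib import Path

RAW = Path("lake/raw/orders")        # hypothetical raw zone of the lake
STAGED = Path("lake/staged/orders")  # hypothetical curated zone feeding a downstream platform

def explore(limit=5):
    """Self-service use: read raw JSON-lines files schema-on-read, no upfront modeling."""
    for path in RAW.glob("*.json"):
        with path.open() as f:
            for i, line in enumerate(f):
                if i >= limit:
                    break
                yield json.loads(line)  # structure is interpreted at read time

def stage():
    """Staging use: the same sources, cleaned and conformed for a downstream platform."""
    STAGED.mkdir(parents=True, exist_ok=True)
    with (STAGED / "orders.csv").open("w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=["order_id", "amount"])
        writer.writeheader()
        for path in RAW.glob("*.json"):
            with path.open() as f:
                for line in f:
                    rec = json.loads(line)
                    if "order_id" in rec and "amount" in rec:  # basic quality gate
                        writer.writerow({"order_id": rec["order_id"],
                                         "amount": float(rec["amount"])})

if __name__ == "__main__":
    for rec in explore():  # ad hoc, self-service inspection
        print(rec)
    stage()                # curated hand-off to the optimized platform
```

The two uses share one store; only the access pattern and the degree of curation differ.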

It was reassuring to see smaller groups of respondents considering a data lake as a data warehouse replacement (9%) or as a single source for all operational and analytical workloads (15%). I expected these numbers to be higher, given the overall market hype.

The second polling question asked what type of data lake audience members had implemented. Before I get into the results, I have to set some context. My colleague Svetlana Sicular identified three data lake architecture styles (see “Three Architecture Styles for a Useful Data Lake”):

  1. Inflow lake: accommodates a collection of data ingested from many different sources that are disconnected outside the lake but can be used together by being colocated within a single place.
  2. Outflow lake: a landing area for freshly arrived data available for immediate access or via streaming. It employs schema-on-read for the downstream data interpretation and refinement. The outflow data lake is usually not the final destination for the data, but it may keep raw data long term to preserve the context for downstream data stores and applications.
  3. Data science lab: most suitable for data discovery and for developing new advanced analytics models — to increase the organization’s competitive advantage through new insights or innovation.

With that context in place, I asked the audience about their implementation:
[Poll results: What type of data lake have you implemented?]

63% of respondents have yet to implement a data lake. That’s understandable – after all, they’re attending a foundational webinar about the concept. The outflow lake was the most common architecture style (15%), and it’s also the type clients ask about most frequently. The inflow and data science lab styles tied at 11%.

The audience also asked some excellent questions. Many asked about securing and governing data lakes, a topic I’m hoping to address soon with Andrew White and Merv Adrian.

Five Levels of Streaming Analytics Maturity

Data and analytics leaders are increasingly targeting stream processing and streaming analytics to get faster time to insight on new or existing data sources. Year to date, streaming analytics inquiries from end users have increased 35% over 2016. I expect that trend to continue.

In getting to real time, these leaders are presented with a range of proprietary commercial products, open source projects and open core products that wrap an existing open source framework. However, in many cases, streaming analytics capabilities are little more than commercially supported open source bundled with some other product. Creating a streaming analytics application is left as an exercise for the buyer.
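
To illustrate what that exercise involves, here’s a toy sketch in plain Python (the event shape, window size and threshold are all hypothetical; a real application would sit on whatever framework the vendor bundles). The windowing, aggregation and alerting logic below is exactly the part the buyer is left to build.

```python
from collections import defaultdict

WINDOW_SECONDS = 60   # tumbling one-minute windows (hypothetical choice)
THRESHOLD = 1000.0    # alert when a window's total exceeds this (hypothetical)

def window_totals(events):
    """Aggregate (timestamp, sensor_id, value) events into tumbling windows per sensor."""
    totals = defaultdict(float)
    for ts, sensor_id, value in events:
        window_start = int(ts // WINDOW_SECONDS) * WINDOW_SECONDS
        totals[(window_start, sensor_id)] += value
    return totals

def alerts(events):
    """The 'application' part: turn windowed aggregates into business alerts."""
    for (window_start, sensor_id), total in sorted(window_totals(events).items()):
        if total > THRESHOLD:
            yield f"{sensor_id}: total {total:.1f} in window starting at {window_start}s"

# Toy usage with a synthetic stream of (timestamp, sensor_id, value) tuples
stream = [(0, "s1", 400.0), (30, "s1", 700.0), (45, "s2", 50.0), (61, "s1", 100.0)]
for alert in alerts(stream):
    print(alert)  # -> s1: total 1100.0 in window starting at 0s
```

None of this is exotic, but multiplied across sources, schemas and consumers, that application layer is a real engineering effort.
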
The challenge is that getting real value from streams of data requires more than a point solution. Stream analytics is a cross-functional discipline integrating technology, business processes, information governance and business alignment. It’s the difficulty of integrating these areas that keeps many organizations from realizing the value of their data in real time. I’ve been working with my colleague Roy Schulte on a streaming analytics maturity model to help organizations understand what’s required at each maturity level.

In “The Five Levels of Stream Analytics — How Mature Are You?”, we present structured maturity levels that data and analytics leaders can use to evaluate the current state of their stream analytics capabilities and to advance their organizations toward becoming smarter, event-driven enterprises. The report focuses on the use of event streams for analytics, with the goal of improving decision making. Gartner clients can download it here.