Google/Walmart Tie-Up Leaves Data Use and Ownership Questions Unanswered

Google and Walmart have announced a partnership in which Google Home users can order Walmart’s products by voice. As Recode points out, the intent of the partnership is to blunt Amazon’s early lead in voice-based ordering. Coming at this from the data and analytics perspective, my first question is: what happens to the customer data from potentially millions of orders?

Google’s position in the partnership is clearly more advantageous than Walmart’s. For Google, the data from voice-based ordering will likely be combined with the customer profile it already maintains and will feed its advertising efforts. Walmart obviously gets the order data too, but who else? Can Google resell that data to other parties? These details weren’t included in the partnership announcement, but Google’s terms and conditions make it clear that it can use the data however it sees fit.

As partnerships between consumer-centric companies proliferate, questions about who owns customer data and how it is used must become prominent for both the companies involved and the consumers affected. After all, consumers provide the data that drives revenue for companies like Google.

Data Lake Webinar Recap

Last Thursday I presented the webinar “From Pointless to Profitable: Using Data Lakes for Sustainable Analytics Innovation” to about 300 attendees. While we don’t consider webinar polling results valid data for research publication (too many concerns about survey sampling), webinar polls can offer some interesting directional insight.

I asked the audience two questions. First, I asked what the data lake concept meant to them. There were some surprises:
[Chart: data lake webinar poll, question 1 results]

The leading expectation for a data lake is as a platform to support self-service BI and analytics (36%), followed by a staging area for downstream analytics platforms (25%). It’s not unreasonable to combine these two: the functionality of a data lake is largely the same in both cases. The users and tools differ for each use case, but it’s still the same data lake. A realistic approach is to think of these two use cases as a continuum. Self-service users first identify new or existing data sources that support a new result. Then, those data sources are processed, staged and moved to an optimized analytics platform.
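
To make that continuum concrete, here is a minimal sketch using PySpark; the bucket paths, column names and the daily revenue result are hypothetical, and any comparable data lake engine would work just as well.

```python
# A minimal sketch of the continuum described above, using PySpark.
# The paths, column names and storage layout are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-continuum").getOrCreate()

# Self-service step: analysts read raw order data directly from the lake,
# with structure applied at read time rather than when the files landed.
raw_orders = spark.read.json("s3://example-data-lake/raw/orders/")

# Shape the raw data into the new result the self-service user identified.
daily_revenue = (
    raw_orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "store_id")
    .agg(F.sum("order_total").alias("revenue"))
)

# Staging step: persist the curated result in a columnar layout so it can be
# loaded into (or queried by) an optimized downstream analytics platform.
daily_revenue.write.mode("overwrite").parquet("s3://example-curated/daily_revenue/")
```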

It was reassuring to see smaller groups of respondents considering a data lake as a data warehouse replacement (9%) or as a single source for all operational and analytical workloads (15%). I expected these numbers to be higher based on the overall market hype.

The second polling question asked what type of data lake audience members had implemented. Before I get into the results, I have to set some context. My colleague Svetlana Sicular identified three data lake architecture styles (see “Three Architecture Styles for a Useful Data Lake”):

  1. Inflow lake: accommodates a collection of data ingested from many different sources that are disconnected outside the lake but can be used together because they are colocated in a single place.
  2. Outflow lake: a landing area for freshly arrived data, available for immediate access or via streaming. It employs schema-on-read for downstream data interpretation and refinement (a brief sketch follows this list). The outflow data lake is usually not the final destination for the data, but it may keep raw data long term to preserve context for downstream data stores and applications.
  3. Data science lab: most suitable for data discovery and for developing new advanced analytics models — to increase the organization’s competitive advantage through new insights or innovation.
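
To illustrate the schema-on-read point in the outflow style, here is a brief sketch, again assuming PySpark with a hypothetical landing path and field names: the same raw files are read twice, each time with only the structure a particular downstream consumer needs.

```python
# A minimal sketch of schema-on-read in an outflow lake, using PySpark.
# The landing path and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("outflow-schema-on-read").getOrCreate()

landing_path = "s3://example-data-lake/landing/events/"  # raw files, kept as they arrived

# One downstream consumer (say, a fraud model) applies only the structure it needs.
fraud_schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])
fraud_view = spark.read.schema(fraud_schema).json(landing_path)

# Another consumer reads the very same files with a different interpretation.
marketing_schema = StructType([
    StructField("user_id", StringType()),
    StructField("campaign_id", StringType()),
    StructField("page_url", StringType()),
])
marketing_view = spark.read.schema(marketing_schema).json(landing_path)

# Nothing was transformed at load time; the raw files stay in the lake,
# preserving full context for any future downstream store or application.
```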

With that context in place, I asked the audience about their implementation:
[Chart: data lake webinar poll, question 2 results]

63% of respondents have yet to implement a data lake. That’s understandable; after all, they’re listening to a foundational webinar about the concept. The outflow lake was the most common architecture style (15%), and it’s also the type clients ask about most frequently. The inflow and data science styles tied at 11%.

The audience also asked some excellent questions. Many asked about securing and governing data lakes, a topic I’m hoping to address soon with Andrew White and Merv Adrian.

Five Levels of Streaming Analytics Maturity

Data and analytics leaders are increasingly targeting stream processing and streaming analytics to get faster time to insight on new or existing data sources. Year to date, streaming analytics inquiries from end users have increased 35% over 2016. I expect that trend to continue.

In getting to real time, these leaders are presented with a range of proprietary commercial products, open source projects and open core products that wrap an existing open source framework. In many cases, however, streaming analytics capabilities are little more than commercially supported open source bundled with some other product. Creating a streaming analytics application is left as an exercise for the buyer.
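
To show what that exercise involves at its most basic, here is a minimal sketch assuming Spark Structured Streaming; the synthetic rate source and console sink are stand-ins for a real event broker and a real downstream destination.

```python
# A minimal sketch of a streaming analytics application using Spark Structured
# Streaming. The built-in "rate" source stands in for a real event stream such
# as Kafka, and the console sink stands in for a real downstream destination.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-analytics-sketch").getOrCreate()

# Synthetic event stream: one row per event, with a timestamp and a value.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# The analytics step: count events in one-minute tumbling windows, tolerating
# events that arrive up to 30 seconds late.
windowed_counts = (
    events
    .withWatermark("timestamp", "30 seconds")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Emit results continuously; a production application would feed a dashboard,
# an alerting system or a downstream store instead of the console.
query = (
    windowed_counts.writeStream
    .outputMode("update")
    .format("console")
    .start()
)
query.awaitTermination()
```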

The challenge is that getting real value from streams of data requires more than a point solution. Stream analytics is a cross-functional discipline integrating technology, business processes, information governance and business alignment. It’s the difficulty of integrating these areas that keeps many organizations from realizing the value of their data in real time. I’ve been working with my colleague Roy Schulte on a streaming analytics maturity model to help organizations understand what’s required at each maturity level.

In “The Five Levels of Stream Analytics — How Mature Are You?”, we present structured maturity levels that data and analytics leaders can use to evaluate the current state of their stream analytics capabilities and to advance their organizations’ maturity toward becoming smarter, event-driven enterprises. The report focuses on the use of event streams for analytics, with the goal of improving decision making. Gartner clients can download it here.