IBM’s TradeLens Highlights Blockchain Ecosystem Challenges

In the early 2000s, there was a lot of hype around B2B portals that would replace expensive EDI (electronic data interchange) infrastructure. I worked on three of them: one in aerospace, another for a specific airline and a third that was meant to be general purpose. The idea was the same: a centralized platform, owned either by a consortium of participants or operated by some third party, would replace EDI with a bunch of XML messages. Sprinkle in some Enterprise JavaBeans and let the cash roll in.

Today’s blockchain platforms are telling basically the same story, minus the EJBs. The 2000s-era B2B portals faced massive challenges and complexity around technologies, data standards and integration. These are the same technical headwinds slowing deployment of blockchain platforms. From a business perspective, those B2B portals also had trouble getting other companies to participate. After all, why would I work with a competitor? As recent news about TradeLens (the aspirational IBM/Maersk blockchain product) indicates, ecosystem development remains a core dilemma for blockchain adoption.

Maersk’s competitors don’t want to use a platform that they don’t own, either from a platform or intellectual property perspective. They also don’t want to undertake massive investment over several years for a project that may not work in the end. Lastly, how these platforms will be governed is still an open question.

Ultimately, I believe these grand blockchain platforms meant to unify industries will go the same way as those 2000s-era B2B portals. Some will succeed in extremely limited fashion but most will fail with a whimper. The most common scenario is that large companies, like Walmart or Toyota, will create and operate their own blockchain-based platforms and smaller competitors will create their own centralized consortiums to realize the same benefits. Rather than industry-wide unification, it’s much more likely the status quo will be maintained because the business challenges can’t be resolved.


Hi-Tech Sleep with the Bose Sleepbuds

The Bose Sleepbuds had an interesting development cycle. Instead of creating something entirely in-house, Bose turned to crowdfunding to figure out the interest level of a high-tech audio sleep aid. The experiment was a success and the product quickly sold out on Indiegogo. I recently received a pair as a gift and, after using them for a few nights, have some initial impressions.


The Good

My biggest concern was that the Sleepbuds wouldn’t stay in my ears overnight once I entered a deep sleep. (I tend to start out as a back sleeper, then all bets are off.) I found the Sleepbuds rolled up in a sheet the first morning, but after that they’ve stayed in reliably. They’re comfortable, with soft rubber fittings that come in small, medium and large. Compared to other Bose earbuds I own, the rubber material seems much softer on the Sleepbuds. The cone shape leading into the ear fits snugly and securely. It’s possible to push it too far in, which can be irritating after a few minutes. It takes a few uses to figure out the right fit, which isn’t unexpected.

The Sleepbuds aren’t general-purpose earbuds, meaning you can’t listen to just any audio stream. You’re limited to ten sounds available from the iOS or Android app, although Bose claims there are more sounds coming. The sounds are what you’d expect: various white noise and some loops, like campfires and streams. I prefer the white noise options, which I’ll get into later. And because these are linked with an app, you can’t use them standalone. If you’re paranoid about how your data is used, this might be an issue.



The Sleepbuds do a great job of blocking out sound, but they aren’t noise-canceling. They’re “noise-masking.” If you’re in a loud room or your sleepmate snores loudly, they won’t be enough to give you a quiet night’s sleep, even at maximum volume.

One nice feature I noticed: once I took the Sleepbuds out of their charging case, they immediately linked with the iPhone app and started streaming audio. Not needing to navigate to the app and start it manually was welcome.

The charging case is where the Sleepbuds live when you’re not using them, or when they aren’t lost somewhere in the sheets. I’ve found the case can recharge the Sleepbuds 3-4 times on a single charge before it needs to be plugged in. The five dots indicate the case’s charge level, while the flashing lights by each ‘bud indicate if you’ve successfully placed the ‘buds on their respective magnetic charging base. Overall, Bose did a great job with this part of the design and experience of using the Sleepbuds.

The Bad

Of course, no product is perfect. If you’re a side sleeper, which I sometimes am, no amount of rubber padding will mask the feeling of a plastic marble getting pushed into your ear canal. That’s woken me up a few times since I started using them. Not a show-stopper, but annoying enough to mention.

Then there are the sounds. The white noise sounds, like Circulate and Warm Static, are fine because they’re ignorable. Other sounds, like Rustle and Tranquility, have loops that are too short. If you’re paying attention to them even slightly, it’s easy to pick up where the loop restarts and, if you’re like me, you’ll spend more time listening for the loop than entering sleep.

The Results

The Sleepbuds don’t magically put you to sleep. They will block out a good deal of sound and give you a droning noise to focus on while you attempt to get to sleep. If you’re stressed about the day you’ve had, or the day you’re about to have, you’ll still have to deal with that. At best, the Sleepbuds give you a personal cone of relative silence with a small footprint.

At $250, the Bose Sleepbuds are an expensive audio sleep aid, but it’s an aid that works for me. I wouldn’t use them while flying due to the risk of losing one or both in a seat, but they’re definitely part of my at-home sleep routine now.

Have you tried the Bose Sleepbuds or something similar? Let me know in the comments.

Avoiding Weasel Words in Your Business Writing

My day job as an industry analyst gives me great exposure to all kinds of business writing. Some of it is good. A lot of it isn’t. A common trait of bad business writing is what I call the illusion of action, or giving the appearance that you’re advising or instructing your reader to do something, but the action is either nonexistent or vague. From the content I’ve reviewed, weasel words are a big contributor to weak business writing.

Weasel words are words that avoid taking a position. You likely see them on a daily basis but they don’t catch your eye because you’re used to weak business writing. The weasel words I’m always on the lookout for are:

Assume, Believe, Consider, Expect, Imagine, Know, Look, Monitor, Own, Realize, Recognize, Reflect, Remember, Think and Understand.

Getting away from the business context for a moment, let’s say you’re reading about grilling steaks. When it gets to the part about determining doneness, the step simply states:
Assess the temperature of your steak for desired doneness. [Bad recommendation]

What does that mean? How do I assess it? By touch? If you’re experienced on the grill, this might make perfect sense to you. But if you’re experienced, it’s unlikely you’re reading the recipe in the first place.

Instead, a weasel-free recommendation might look like:
Use a digital thermometer to check the doneness of your steak. Rare steaks are between 120°F and 125°F, while medium rare steaks… [Good recommendation]

The good recommendation tells the reader how to do something and, when necessary or available, provides some data supporting or scoping the recommendation.
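The difference between the two recommendations is specificity: the good one names a tool and gives measurable thresholds. For software-minded readers, the same principle can be sketched as code; the temperature bands below are the commonly quoted ones, and the exact cutoffs are illustrative rather than authoritative:

```python
def doneness(temp_f: float) -> str:
    """Map a steak's internal temperature (in °F) to a doneness label.

    The bands follow commonly quoted ranges (rare 120-125°F,
    medium rare 130-135°F, and so on); treat them as illustrative.
    """
    bands = [
        (120, "under rare"),                 # below 120°F: not yet rare
        (125, "rare"),
        (130, "between rare and medium rare"),
        (135, "medium rare"),
        (145, "medium"),
        (155, "medium well"),
    ]
    for upper_bound, label in bands:
        if temp_f <= upper_bound:
            return label
    return "well done"
```

Like the good recommendation, the function leaves no room for interpretation: every input maps to a definite answer, which is exactly what weasel-free instructions do for a reader.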

Let’s Talk About ‘Leverage’

‘Leverage’ is a massively overused word in business writing. I can argue that it’s a weasel word because it is used to avoid taking a position, but it almost always means ‘use.’ You’re better off using the simpler and more direct language. The same is true of ‘utilize.’ Always use the shorter, more direct version to communicate with your audience.

Weasel words are evasive and destroy the value you’re trying to create for your audience. Avoid them by taking a position for your reader. If you find that difficult, you may not know your audience or the topic well enough yet.

A Cognitive Model for Decision-Making with Data Visualizations

Data visualizations increasingly inform our daily decisions. Traffic visualizations inform which route to take to the office, business intelligence dashboards indicate how you’re doing on projects and key performance indicators, and data collected by fitness trackers tells you how close you are (or aren’t) to reaching your weight loss or fitness goals.

Each of these domains (transport, performance, fitness) uses different kinds of visualizations and may require different decision processes and frameworks. While there’s been significant research on how data visualizations affect decision-making within isolated domains, there hasn’t been much cross-domain research attempting to uncover a common cognitive decision-making framework. That is, until recently.

Earlier this year, a team led by Lace Padilla conducted an analysis of decision-making theories and visualization frameworks and proposed an integrated decision-making model.

What are Decision-Making Frameworks?

Over the last 30 years, the dominant theory of how humans make risk-based decisions has been dual-process theory. In the first process, humans make reflexive, intuitive decisions with little consideration. This is also called Type 1 processing. Type 2 processing is more deliberate and contemplative. The two types were made famous by Daniel Kahneman in “Thinking, Fast and Slow.” There have also been proposals that these two types are a gross oversimplification of how the human brain makes decisions, and that the reality is closer to a spectrum of decision-making based on required attention and working memory.

Cross-Domain Research Findings

The researchers discovered four findings as part of the review. The first two are driven by Type 1 processes, the third by Type 2, while the fourth appears to be influenced by both.

Visualizations direct viewers’ bottom-up attention, which can be helpful or detrimental

Things like colors, edges, lines and other foreground information can cause involuntary shifts in attention (bottom-up attention). This may cause viewers of a visualization to focus on things like icons while missing task-relevant information. In one example, reproduced from the original document, some viewers were willing to pay $125 more for tires when viewing the visualizations versus viewing a textual representation.


Bottom-up attention has a significant influence on decision-making, but it’s also a Type 1 task that likely influences the initial decision-making process.

Visual encoding techniques prompt visual-spatial biases

How a visualization is presented can trigger biases. One example is using semi-opaque overlays to indicate a user’s probable location on a map. Representing that location as a blurred area produced different decisions than representing it as a fixed probability area, depicted below:

[Figure: blurred versus fixed-boundary representations of probable location]

Like the previous finding, these visual-spatial biases are a Type 1 process occurring automatically.

Visualizations that have a better cognitive fit result in faster and more effective decisions

“Cognitive fit” describes the alignment between the task or question and the visualization. In other words, is the visualization formatted in a way that facilitates answering the question being asked? The researchers used the example of finding the most significant members of a social media network. When the graph was formatted in a way that didn’t facilitate the task, participants with less working memory capacity performed the task more slowly than those with greater working memory. When using a visualization optimized for the task, there was no difference in task completion times.
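Cognitive fit has a loose analogy in software: a data representation aligned with the question answers it directly, while a mismatched one forces an extra transformation, the equivalent of the mental work described above. A hypothetical sketch (the network and the question are invented for illustration):

```python
from collections import Counter

# Hypothetical question: "who is the most connected member?"
# Edge-list form (poor fit): answering requires an extra transformation,
# tallying each member's degree before any comparison can happen.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
degrees = Counter(member for edge in edges for member in edge)

# Degree-table form (good fit): once the representation matches the
# question, the answer is a direct lookup.
most_connected, degree = degrees.most_common(1)[0]
print(most_connected, degree)  # 'a' appears in three edges
```

Like a task-optimized visualization, the degree table removes a transformation step; the edge list holds the same information but demands more working memory to use.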

Knowledge-driven processes can interact with the effects of the encoding technique

The last finding is that the knowledge that a person possesses can impact how the visualization is used, triggering biases or allowing viewers to use existing expertise. Knowledge might be temporarily stored in working memory or held in long-term memory and used with some effort (both Type 2), or stored in long-term memory and automatically used (Type 1).

The Cross-Domain Model

The model the researchers developed adds working memory to a previously existing model of visualization comprehension. Working memory can influence every step in the decision-making process, except bottom-up attention.

[Figure: the integrated cross-domain decision-making model]


As part of their review and the previously depicted cross-domain model, the researchers created several recommendations for data visualization designers:

  • Create visualizations that identify the critical information needed for a task and use visual encoding techniques to direct attention to that information.
  • Use a saliency algorithm to determine the elements in a visualization that will likely attract viewers’ attention.
  • Try to create visualizations that align to a viewer’s mental “schema” and task demands.
  • Ensure cognitive fit by reducing the number of mental transformations required in the decision-making process.

Overall, this is excellent work that should be top of mind for anyone using and presenting data visualizations to decision-makers.

Book Review: ‘How Asia Works’ by Joe Studwell


“…in a functioning society markets are shaped and re-shaped by political power”

During my undergrad, one of the most enjoyable classes I took was one on developing emerging economies. The documented progression of economies from agriculture to manufacturing was fascinating, but it was only a 300-level course and it was short on details. I found Joe Studwell’s “How Asia Works” on some recommended book list and promptly added it to my Kindle.

“How Asia Works” is a detailed look at the economic history of South Korea, Taiwan, the Philippines and Malaysia. Two are economic standouts and two have yet to meaningfully reach any kind of economic escape velocity. The topics of agricultural reform and development, manufacturing and the liberalization of financial markets each get a detailed chapter, while China’s success is explored last.

Studwell has written a thoroughly researched book, and he delivers that level of detail without the prose feeling bogged down. For the topic (economic history is usually impossibly dry), the book reads well. That said, the excessive length of the individual chapters makes the book a bit of a slog to get through. Despite the quality of the writing, 400 pages still reads like 800.

What I found surprising about the history of the successes and failures was the role of government policymaking in shaping these economies. Enforced land reform, protectionism, a government-led focus on exports, technology mastery, and slow deregulation of respective financial markets are the characteristics of the winners. Studwell links these policies to similar developing economies, including the United States’ early development, which influenced the economies of Germany and Meiji Japan.

Studwell makes an excellent case that no significant economy has developed through free trade and deregulation from the outset. Proactive interventions, starting with agriculture and then manufacturing, drive the accumulation of capital and technological mastery. There doesn’t appear to be a way for countries to bypass these essential steps.

In short, I highly recommend this book if you’re interested in the history of economic development and how those lessons translate to today.

Resolving the Presenter’s Paradox

Deciding what information to include in a presentation is a challenge everyone faces. From the presenter’s perspective, every fact that supports the presentation objective has some value. These might be case studies, data points, primary research, or other elements that drive the point home. Some facts, like primary research studies, might have a high impact while others, like anecdotes and informal stories, have less impact. Regardless of the weight of the information, presenters believe including all favorable information improves how audiences receive and evaluate the content. Presenters believe this creates an additive effect, roughly depicted below.


Unfortunately, this isn’t how audiences evaluate content. Information with less impact dilutes more impactful information. Rather than an additive view, audiences take an averaging approach. As a presenter, you might think you’re thoroughly convincing the audience by including every snippet of data, but the audience experiences it differently:


You might experience this when watching a movie. As an observer, your evaluation is based on the entire movie. If the story is captivating but falls apart in the last act, you’re likely to rate the movie less positively even though most of the movie was excellent. Another example might be an offer to purchase a new smartphone on its own, or a slightly more expensive bundle that includes low-quality headphones. In comparing the two offers, the low-quality components reduce the desirability of the bundle relative to just buying the smartphone. This focus on the big picture instead of individual components is called holistic processing.
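The gap between the presenter’s additive model and the audience’s averaging model is easy to see with toy numbers (the scores below are illustrative, not data from the study):

```python
# Illustrative scores for individual pieces of evidence (0-10 scale).
strong = [9, 8, 9]   # e.g. primary research, solid case studies
weak = [3]           # e.g. a so-so anecdote

# The presenter's implicit model: every favorable item adds value.
additive_with_weak = sum(strong) + sum(weak)   # 29, more than 26

# The audience's holistic model: items are averaged.
avg_with_weak = (sum(strong) + sum(weak)) / (len(strong) + len(weak))
avg_strong_only = sum(strong) / len(strong)

# Adding the weak item raises the total but lowers the average,
# so the audience's overall evaluation drops.
assert additive_with_weak > sum(strong)
assert avg_with_weak < avg_strong_only
```

Dropping the anecdote keeps the average at its maximum, which is the practical upshot of the paradox: include only your strongest material.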

Presenters generally fail to recognize holistic processing because their objectives are different from the evaluators’. Evaluators assess the entire presentation, while presenters build presentations from individual components, which become their own objects of attention. This happens largely because presenters create content using a bottom-up, rather than a top-down, approach.

Recommendations for presenters:
  • Build your storyline first, then support it with only the most relevant facts. Avoid the bottom-up approach whenever possible.
  • Evaluate potential information in the context of the overall story rather than discretely. Moderately impactful information will dilute the impact of highly impactful information.
  • Choose the right information for your audience and message. Growth-centric presentations should avoid information on risk and loss, while prevention-centric presentations should highlight it.



The Presenter’s Paradox. Weaver, Garcia, Schwarz. 2012.

Are Chinese Companies Reading Employee Emotions?

On April 30th, the South China Morning Post reported that Chinese companies are using brain-reading technology to detect the emotional state of workers. The article was short on details but long on effectiveness claims. If you missed it, the device looks like this:

[Image: the brain-monitoring cap worn by workers]

The device appears to fit directly into a uniform hat or helmet, but doesn’t feature a “wet” connection in the form of electrodes. It’s possible the inside curve touches the head, providing the data feed, but it’s unlikely the device provides useful diagnostic information. Even less likely is that the data lets employers understand the emotional state of their employees. Even traditional EEGs provide only basic data, and that requires calibration.

That hasn’t stopped State Grid Zhejiang Electric Power from claiming the technology has resulted in a profit increase of US$315 million since its introduction. What’s more likely is that employees, aware of the monitoring, are simply working harder because they’re afraid of losing their jobs. This isn’t sustainable. Possible outcomes include increased stress, employee burnout, and, potentially, workplace accidents.

This isn’t the only place where the Chinese surveillance state is pushing its citizens. In another story from Hangzhou (Hangzhou seems to be the surveillance capital of China), schools are using facial recognition technology to ensure children are paying attention. Again, the likelihood this technology does what it advertises is vanishingly small, but the societal impact will be real.