case study
2406-01

Strategic Ecosystem Positioning in Product Development

PUBLISHED
timeline June 2024
client Tech Punditry
deliverable Medium Article
Product Dev · AI Product · LLMs · Ecosystem Strategy · Medium

Originally published on Medium as "A Beautiful Sherlock" (June 2024). Adapted here with insights and a 2025 follow-up.

It happened to me

Chatting with a “Sherlocked” assistant, via Messages app on macOS

To be clear, it hadn't escaped me that this was an inevitability. What was elegant, however, was the breadth and depth with which Apple finally addressed the whole "what's Apple doing in AI?" question. Their answer was arguably more complete than anything the rest of the tech giants had demonstrated to date. And, as Apple is wont to do before really diving into any product category, they've paired the introduction of Apple Intelligence with a well-integrated, if not explicitly articulated, commercial strategy (we see you, Apple devices!).

All this excitement

During the past year and a half, as public awareness of generative AI exploded, I was excited to immerse myself, up-skill, and experiment with this and other AI projects. However, I had lingering questions about how it would all work "in the real world." The deeper questions around access, privacy, and security, while lightly acknowledged in others' offerings, were conveniently deferred or ignored altogether (look over here, it's "AI-powered"!). Let's also put aside for a moment the ethical questions and conversations around p(doom).

Meanwhile, a flurry of work, development, and commercialization took place around new-for-now skills, products, and product building blocks (not MECE): prompt engineering, chatbots, fine-tuning, integration and retrieval-augmented generation, data analysis and predictive analytics, office productivity, workflow efficiency, creativity, and so on.

Context Switching

Here it's important to provide some background for my thesis: to recall a previous statement I made about context switching being one of the last user-experience hurdles to seamlessly integrating computing into our everyday workflow (see the GitHub repo for my sherlocked project).

You're breakin' my stride

The idea was to develop a proof-of-concept ambient AI assistant that could be accessed from any device, with full personal, historical activity and context. It should recede into the background and not be "a thing" in and of itself. Ideally, I would have loved to augment Siri directly. Moving toward Messages on iOS/macOS was the next best option because it was a less onerous context switch (read: app). It has near-zero friction: it's used all the time, it has a simple text interface and a built-in (though less frequently used) voice-input capability, and its database of conversational history was an exciting proxy for much of the personal context I was after. These are also things that wouldn't need to be built into the chrome and feature set of a brand-new assistant app. It was the idea of finally being able to text conversationally with Siri to do all the things, with all of my data and documents, across all of the ecosystem's apps and devices!
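To make the "conversational history as personal context" idea concrete, here is a minimal sketch of mining a Messages-style SQLite store for recent context. On macOS the real database lives at ~/Library/Messages/chat.db (and reading it requires Full Disk Access); its schema is much richer, and its `date` column counts nanoseconds since 2001-01-01. The schema and data below are a tiny mocked subset so the sketch is self-contained, not Apple's actual layout.

```python
import sqlite3

# Mock a minimal Messages-like schema: a `handle` table of contacts
# and a `message` table of texts, mirroring (loosely) chat.db's shape.
DB_SETUP = """
CREATE TABLE handle (ROWID INTEGER PRIMARY KEY, id TEXT);
CREATE TABLE message (
    ROWID INTEGER PRIMARY KEY,
    handle_id INTEGER,
    text TEXT,
    is_from_me INTEGER,
    date INTEGER
);
INSERT INTO handle VALUES (1, '+15551234567');
INSERT INTO message VALUES (1, 1, 'Lunch tomorrow?', 0, 100);
INSERT INTO message VALUES (2, 1, 'Sure, noon works.', 1, 200);
"""

def recent_context(conn, limit=50):
    """Return the most recent messages as (sender, text) pairs,
    oldest first, ready to fold into an assistant's prompt."""
    rows = conn.execute(
        """SELECT CASE WHEN m.is_from_me = 1 THEN 'me' ELSE h.id END, m.text
           FROM message m JOIN handle h ON m.handle_id = h.ROWID
           ORDER BY m.date DESC LIMIT ?""",
        (limit,),
    ).fetchall()
    return list(reversed(rows))

conn = sqlite3.connect(":memory:")
conn.executescript(DB_SETUP)
print(recent_context(conn))
# → [('+15551234567', 'Lunch tomorrow?'), ('me', 'Sure, noon works.')]
```

The same query against the real chat.db is exactly where the privacy and authentication-scope questions discussed below begin: the proxy for personal context is also a trove of sensitive data.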

But as I developed this multi-modal, multi-functional assistant, the questions of privacy, security, authentication scope, attack-surface exposure, and more kept creeping into the problem set.

Now I'm getting in the way

Isn't that convenient

So it has come to pass, once again, that because of its ecosystem of integrated hardware and services, Apple has been able to present a more coherent architecture in which the prickly issues arising from "AI everywhere" are more directly addressed. While my assistant takes risks jumping through hoops (e.g., getting forced down the path of using sudoer, for TWK), Apple's AI integration interface, App Intents, is natively built into the ecosystem's security fabric.

Keychain and device biometrics are an already-present, relatively frictionless authenticator that facilitates the handshakes and negotiations between the multitude of apps and subscription services required to complete an AI user journey. And to guard against man-in-the-middle attacks and injections, the new "blockchain-ish" mechanism described in their keynote follow-ups puts an often-derided technology to good use.
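The core idea behind that "blockchain-ish" mechanism, an append-only, tamper-evident log, is simple to illustrate. The toy below is my own sketch of a bare hash chain, not Apple's implementation (which involves a verifiable transparency log and attested software images): each entry commits to the digest of the entry before it, so rewriting history anywhere breaks every later link.

```python
import hashlib

GENESIS = b"\x00" * 32  # fixed starting digest for the chain

def chain(entries):
    """Build a hash chain: list of (entry, hex digest) pairs, where each
    digest covers the previous digest plus the current entry."""
    prev, out = GENESIS, []
    for e in entries:
        prev = hashlib.sha256(prev + e.encode()).digest()
        out.append((e, prev.hex()))
    return out

def verify(chained):
    """Recompute every link; True iff no entry was altered."""
    prev = GENESIS
    for entry, digest in chained:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        if prev.hex() != digest:
            return False
    return True

log = chain(["release v1 signed", "release v2 signed"])
print(verify(log))  # → True

# Tamper with the first entry while keeping its recorded digest:
tampered = [("release v1 UNSIGNED", log[0][1])] + log[1:]
print(verify(tampered))  # → False
```

The point of the structure is that a client only needs the latest digest to detect any rewrite of earlier entries, which is what makes it useful for publishing verifiable records of server software.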

iCloud conveniently provides the relationships from which additional context and security can be inferred, and to which scope can be limited and/or extended. And when more horsepower is required, a new "Private Cloud Compute" facility, powered by Apple silicon, provides a securely partitioned enclave from which all data is deleted upon task completion.

Yes, that IS convenient!

When combined with on-device and in-ecosystem data, plus the managed contacts that provide personal context (they call it an "on-device semantic index"), it almost seems as if this was the plan all along: a plan that may have just Sherlocked a whole class of AI-powered, third-party, stand-alone, commercially available products built on competing standards and AI building-block technologies. Those products have either been renting space on Apple's platform, which has left their integrations lacking, or trying to create platforms of their own, working in futility against scale and network effects.

Scale also matters

A privacy oriented elephant is a forgetful one?

Apple is not alone in this approach. While they've demonstrated a strong lead, there are others to watch. Microsoft is an obvious one, with its drive to integrate Copilot into software and hardware, though even its efforts around the recent Recall feature were not nearly as elegant or well thought out; so much so that the feature had to be rolled back, for now (I missed an opportunity for a pun there, somewhere).

Google is another, along with the myriad providers of large language models and assistant technologies such as OpenAI, Anthropic, Meta, and Mistral. Multi-modality and enhanced search were their latest collective breakthrough, but their lead, differentiation, and ultimate role remain open questions. To wit, the Apple/OpenAI partnership news was a bit of a red herring. For what's really in an LLM that is presented as a fallback option and can be readily and easily swapped?

Amid all this, there are also the perpetual questions and debates around openness vs. walled gardens. While I'll reserve judgement, and can already hear the clamouring from the "expert-user" crowd, a case is again emerging for the latter.

Slow Clap

So, I kinda have to give up that slow clap to Apple for the beautiful Sherlock.

To the casual consumer observer, the keynote may present itself as nothing more than flashy marketing of gimmicky Siri, Messages, and Photos features. And there will be tech pundits calling this "lock-in," a shallow play for device revenues, and saying it's about time Siri were fixed and upgraded (I agree, it is!). I'm also aware that this piece can fairly be categorized as Apple fanboyism and/or tech punditry, except I'm not the one commercially incentivized to whip up discourse on anything Apple.

But for those experimenting, developing, and integrating AI into products and experiences, Apple’s move lays out a vision for AI integration. For now, it seems the most comprehensive and coherent of what’s been proposed and presented so far. Their coordination simply eclipses other efforts.

To be fair, apps with scale can probably refine their offerings and/or pivot to App Intents, but that will only further support Apple's vision. I'll continue to push forward with my assistant. At the very least, it's a demonstration of how such an assistant can operate on older hardware and less-enabled platforms. And it'll force me to re-think, or reinforce, some of my own positions about context switching and ambient AIs.

But, for the industry — that murderous, beautiful Sherlock has returned. And it hurt so good.

Case closed?