★ Is Chrome Even a Sellable Asset?


There are two ways to consider a forced divestiture of Chrome by Google, as the U.S. Department of Justice has, for months now, been requesting after Judge Amit P. Mehta ruled that Google has illegally maintained its monopoly in web search. One is from a business perspective (which I believe is the only perspective considered by the DOJ). The other is from a technical perspective. I don’t think either makes any sense. I’m not talking about whether it’s fair or just that Google be forced to sell Chrome. I’m talking about whether it’s even possible in any practical sense.

The Business Perspective

The whole premise that forcing Google to sell Chrome would be an appropriate remedy for their illegal monopolizing is predicated on the notion that Chrome, in and of itself, is a valuable asset. Here’s an article from Bloomberg reporters Leah Nylen and Josh Sisco that asserts in its headline “Google’s Chrome Worth Up to $20 Billion If Judge Orders Sale”. Their source for this valuation, which, again, they simply state as fact in their own headline, is a “Bloomberg Intelligence analyst”:

Should a sale proceed, Chrome would be worth “at least $15-$20 billion, given it has over 3 billion monthly active users,” said Bloomberg Intelligence analyst Mandeep Singh.

The price prospective buyers are willing to pay may depend on their ability to link Chrome to other services, said Bob O’Donnell of TECHnalysis Research. “It’s not directly monetizable,” he said. “It serves as a gateway to other things. It’s not clear how you measure that from a pure revenue-generating perspective.”

3 billion users = $15–$20 billion is not real math. It’s just bullshit. The users are only valuable right now because they perform a lot of Google web searches within Chrome. Chrome users also make money for Google by using other Google properties that show ads, like Maps and Gmail. And Chrome encourages users, in general, to use Google properties and services like Docs. If you try to work out how valuable Chrome is to Google, it’s seemingly worth a veritable fortune. But that doesn’t mean Chrome holds any value of its own, on its own.1

Google also makes money from showing search ads to users of other browsers, like Safari and Firefox, but with those browsers Google pays traffic acquisition fees to Apple and Mozilla (respectively). In 2021 those fees amounted to over $26 billion, almost $20 billion of which went to Apple alone. David Pierce, writing for The Verge in 2023:

Just to put that $26.3 billion in context: Alphabet, Google’s parent company, announced in its recent earnings report that Google Search ad business brought in about $44 billion over the last three months and about $165 billion in the last year. Its entire ad business — which also includes YouTube ads — made a bit under $90 billion in profit. This is all back-of-the-napkin math, but essentially, Google is giving up about 16 percent of its search revenue and about 29 percent of its profit to those distribution deals.
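For what it’s worth, Pierce’s back-of-the-napkin arithmetic checks out. Here’s a minimal sketch of it in Swift, using only the figures quoted above; the variable names are mine:

    // Figures quoted above, in billions of US dollars.
    let tacPayments = 26.3      // traffic acquisition fees Google paid in 2021
    let searchRevenue = 165.0   // Google Search ad revenue over the trailing year
    let adProfit = 90.0         // profit from Google's entire ad business

    let shareOfRevenue = tacPayments / searchRevenue   // ≈ 0.16, i.e. about 16 percent
    let shareOfProfit = tacPayments / adProfit         // ≈ 0.29, i.e. about 29 percent
    print(shareOfRevenue, shareOfProfit)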

A key point to remember is that Google doesn’t pay Apple or Mozilla to make Google the default search engine in Safari and Firefox. They pay Apple and Mozilla per search that goes to Google from those browsers. It may or may not be in their contracts that Apple and Mozilla will make Google the default search engine in their browsers, but even if it is, that’s not what Google is paying for. They pay per search. It seems widely understood that one of the remedies that will come out of the U.S. v. Google verdict is that Google will be banned from any agreements that make Google search the default in other browsers. But I think it’s pretty clear how that will play out.

Option (a) would be that Apple (and Mozilla, and Samsung, and the handful of other companies that make browsers with sizable market share who currently set Google search as the default2) continue to make Google the default for search, even though it would no longer be a contractual demand in the Traffic Acquisition Cost (TAC) agreement between Google and the browser maker. In other words, right now, I think the contract between Google and Apple for TAC looks something like this:

Google will pay Apple $X per web search that goes to Google from Safari, and Apple will make Google the default search in Safari.

After the dust settles on the DOJ case against Google, it might look like this:

Google will pay Apple $X per web search that goes to Google from Safari, but Apple is under no obligation to make Google the default search in Safari.

And then Apple will simply choose to keep Google as the default search in Safari, and the TAC payments will continue to flow unabated. Same for Mozilla and Samsung and any other browser with Google as the current default. The money is good, Google is still considered the best general purpose web search engine, and users expect those searches to go through Google.

Option (b) would be that Google is somehow forbidden from accepting default search engine placement in other browsers — but I don’t think even that would change the TAC situation. And such a ruling would be weird, right? It’s Google that lost a major antitrust lawsuit and now faces a remedy reckoning, so it seems reasonable that Google might be forbidden from any contract that requires Google search to be the search default in another browser. Apple didn’t lose an antitrust case. (Yet?) Mozilla certainly didn’t. So how could Apple or Mozilla be forbidden from choosing, of their own volition, to keep Google as the default search engine in their browsers? But even if they were, they wouldn’t switch the default search in Safari and Firefox to Bing or DuckDuckGo or whatever. They’d have no default search at all, and instead present a choice screen to new users, with Google as one of the handful of options, and the overwhelming majority of users would pick Google, and very little would change. The DMA requires these choice screens in the EU and Google search still has over 90 percent share there. It’s hard to fathom a US court ruling that browser makers aren’t allowed even to offer Google search as an option for built-in search. (Even the EU didn’t do that.)

It would seem even more punishing to Apple and Mozilla and Samsung et al if the DOJ attempted to prevent Google from making TAC payments to browser makers, period. In that scenario Google would just get to keep all the money they’re currently paying to those companies for the traffic — it would be a reward to Google and a punishment to Apple. (And possibly a death sentence for Mozilla.)

With Chrome, Google gets to show users ads without paying any sort of traffic acquisition fee to the browser maker, because they’re the browser maker. Chrome is extremely profitable for Google not because it makes any money on its own, but because every Google search that starts in Chrome is a search Google doesn’t have to pay a TAC fee for.

If Google were forced to sell Chrome, and found a buyer, presumably the entire appeal to the buyer would be that they’d start collecting those TAC fees from Google, just like Apple does with Safari.

Here’s MG Siegler, spitballing last year on who might possibly buy Chrome, if the U.S. Department of Justice is successful in its attempt to force Google to divest it:

It’s not clear who could pay what for Chrome. Bloomberg throws out the notion of OpenAI being one potential home, but would the government really want that? That would risk anointing — well, really entrenching — a king in a new field. OpenAI’s main benefactor, Microsoft, could acquire it, especially now that their own Edge browser is all-in on Chromium. But they would probably just use it to bolster not just Bing but also their own AI products and services. And that would be extremely awkward for the government as well.

Apple wouldn’t want Chrome and shouldn’t be allowed to buy it for obvious reasons. Mozilla has built Firefox on completely different technologies, but with that company in some amount of peril, perhaps it would be worth the “hail mary” — but could they possibly afford it? And honestly, what would they really do with it? They famously don’t have their own search engine. And their AI work is nascent at best. So they buy Chrome and strike a deal with Bing or DuckDuckGo? Does anyone want such a Frankenstein product? Same story with Opera, etc.

It’s hard to come up with a buyer who could afford to pay a high price for Chrome and who would pass regulatory muster as its new owner. And if Chrome is not worth a high price, or simply isn’t sellable at one because there’s no plausible buyer, then why is the DOJ trying to force Google to sell it? They might as well try to force Google to sell the two o’s from its name.

The Technical Perspective

Ryan Whitwam, writing for Ars Technica just last week, “OpenAI Wants to Buy Chrome and Make It an ‘AI-First’ Experience”:

The remedy phase of Google’s antitrust trial is underway, with the government angling to realign Google’s business after the company was ruled a search monopolist. The Department of Justice is seeking a plethora of penalties, but perhaps none as severe as forcing Google to sell Chrome. But who would buy it? An OpenAI executive says his employer would be interested. Among the DOJ’s witnesses on the second day of the trial was Nick Turley, head of product for ChatGPT at OpenAI.

While Judge Amit Mehta has expressed some skepticism about the DOJ’s proposal to divest Chrome, the government claims the browser is core to Google’s anticompetitive conduct. Further, the DOJ team believes that selling Chrome would level the online playing field, but it has not been clear who would buy the browser.

According to Turley, OpenAI would throw its proverbial hat in the ring if Google had to sell. When asked if OpenAI would want Chrome, he was unequivocal. “Yes, we would, as would many other parties,” Turley said.

OpenAI has reportedly considered building its own Chromium-based browser to compete with Chrome. Several months ago, the company hired former Google developers Ben Goodger and Darin Fisher, both of whom worked to bring Chrome to market.

This is the aspect of the US case against Google that most shows the DOJ has little real idea how anything actually works in tech. The non-Google aspects of Chrome are completely open source. No need for dick quotes around the “open” there. Just go to the Chromium project and download the code, which includes all of Blink, Chromium’s web engine that Google forked from WebKit in 2013. There’s even an open source project called Ungoogled Chromium that delivers a completely Google-free Chromium experience. Everything about Chromium, the browser app, looks and feels like Chrome. Except it doesn’t have any of the integration with Google’s web services and your Google account(s).

There are dozens of for-profit browsers built from the Chromium code base. Microsoft’s Edge. Brave. Vivaldi. Even the venerable Opera — a browser that debuted in 1994! — became a forked version of Chromium a decade ago.

We know the value of a Google-free version of Chrome. Nothing. Zero. You can install and use that browser today, or even modify and compile its source code, free of charge. And if a commercial entity wants to take that base and build its own proprietary layer on top of that, they can do it. Microsoft and Brave and the others already have. And we know how popular those browsers are — which is not very popular at all.

If, back in the late 1990s, Microsoft had been forced to sell off its Office suite of apps, or split into two separate companies, a Windows/OS company and an Office/apps company, you can see how there would have been value in both entities. Windows generated (and still generates) a lot of revenue on its own. Office generated (and still generates) a lot of revenue on its own. There was also tremendous technical value in the closed source code to both “divisions”. There’s no value like that at all with Chrome, independent of Google as a whole.

What has value are the billions of users using the actual Chrome from Google. All of those users could be using Edge or Brave or Vivaldi — or just plain Chromium — instead. They’d be getting the exact same rendering engine and the exact same basic browser user interface. But they’re not. They’re using Chrome. For chrissake, Microsoft still owns and controls Windows and has made Edge — which, I repeat, is just a fork of Chromium — Windows’s default browser, and yet Edge has just 14 percent desktop market share to Chrome’s 66 percent.

The DOJ can’t force Google to sell Chrome’s user base because they’re not Chrome users, per se; they’re Google users. In practical terms, what the DOJ is asking for is that Google be forced to shut Chrome down, and then, I guess, sell off the husk of its remains. Chrome does hold incredible value, but that value is inherent to Google and to Google/Chrome’s users. It’s not a standalone product with any commercial value whatsoever. It’s just a software layer between Google and its users.

The more I think about it, the more it looks to me like a complete fantasy that Google even could sell Chrome. It would be at least a somewhat different situation if Chrome were mostly closed source. But it’s not. In fact, it’s the opposite — it’s almost entirely open source. So what even is there to buy?


  1. Given that Safari generates over $20 billion in revenue for Apple annually, almost all of it in TAC fees from Google, and that surely almost all of that revenue is profit, and that Chrome has more than 3× Safari’s global web browser market share (across all devices, desktop and mobile), surely Chrome saves Google at least, say, $20–30 billion in TAC fees that Google would be paying to another company if some other company owned Chrome. If Apple generates $20 billion in profit from TAC fees for Safari, surely Chrome would generate at least as much, if not more, for a hypothetical buyer of Chrome who somehow managed to keep Chrome, under its ownership, as popular as Chrome is under Google’s. So that would mean Chrome, as a purchasable asset, would surely be worth far, far more than $20 billion. If you valued Chrome at 10× revenue, that would mean it’s worth something like $200–300 billion. (A rough sketch of that arithmetic, in code, follows these footnotes.) But of course it’s not worth that much as a standalone entity, because it would never work out that a new owner could keep Chrome as popular as it is today, as an integrated Google product.

    As a spitball thought exercise, consider what Safari is worth, if Apple were forced to sell it. We know Safari generates $20 billion per year. But Safari doesn’t generate that money because Safari is Safari. It generates that money because Safari is the integrated default browser on iPhones, iPads, and Macs. Safari is extremely valuable to Apple as an integrated part of Apple’s platforms, but it would hold virtually no value at all as an independent, standalone web browser. It’s like trying to ask what the Apple logo is worth, on its own. ↩︎

  2. The only popular browser that ships with something other than Google as the built-in default for web search is Microsoft Edge, which, of course, defaults to Bing. Statcounter pegs Edge’s global market share at 5 percent overall, and 14 percent on desktop. That’s a minority share, to be sure, but it’s something. Statcounter puts MacOS’s share of the desktop market at 15 percent. So about the same percentage of desktop users are using Edge as their web browser as there are using Macs as their computer. There is a version of Edge for Mac, but there is very little overlap between the 15 percent of desktop users on Macs and the 14 percent of desktop users using Edge as their browser. ↩︎
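As promised in the first footnote above, here’s a rough sketch of that spitball valuation arithmetic in Swift. Every number is an assumption carried over from the footnote, not a reported figure:

    // Spitball figures from footnote 1, in billions of US dollars — all assumptions.
    let safariAnnualTAC = 20.0                  // roughly what Safari earns Apple per year in TAC fees
    let chromeAnnualTACLow = safariAnnualTAC    // the footnote's "at least as much as Safari" floor
    let chromeAnnualTACHigh = 30.0              // the footnote's higher guess
    let revenueMultiple = 10.0                  // a 10x revenue multiple, purely for argument's sake

    // Prints 200.0 and 300.0 — the "$200–300 billion" range in the footnote.
    print(chromeAnnualTACLow * revenueMultiple, chromeAnnualTACHigh * revenueMultiple)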

jheiss
11 days ago
What about adopting the credit card model, where browser vendors kick back a chunk of the TAC fees to the user?

They’re gonna do a season three of The Lord of the Rings:...

They’re gonna do a season three of The Lord of the Rings: The Rings of Power. I enjoyed both seasons, but the second one was definitely better.

💬 Join the discussion on kottke.org

jheiss
35 days ago
Did Jason watch a different show than the rest of us?

MacOS 15 Sequoia’s Annoying-as-Hell ‘Turn On Reactions’ Menu Bar Prompt


Matt Birchler:

I really thought that the screen recording notifications in macOS Sequoia would be the bane of my existence, but thankfully those have been changed quite a bit from the early betas last summer and they’re totally a non-issue in my book today. However, these god damned “turn on reactions” alerts have got to die in a fire, and they need to have done it yesterday.

I understand why Apple decided to show this once. Why, though, is it seemingly designed to reappear every time I start a video call? Who is not annoyed by this?

jheiss
40 days ago
That's an OS thing? I assumed it was something stupid Google Meet was doing. Either way it is annoying, but also I've mostly ignored it because _I'm in the middle of starting a call_ and don't need shit popping up on my screen.
trekkie
33 days ago
To my knowledge, in some areas it’s a real legal thing with implications. Doing IT stuff when Skype for Business added calls, it ranged from a Big Deal in two-party-consent areas to no deal at all in other areas.

★ Something Is Rotten in the State of Cupertino


In the two decades I’ve been in this racket, I’ve never been angrier at myself for missing a story than I am about Apple’s announcement on Friday that the “more personalized Siri” features of Apple Intelligence, scheduled to appear between now and WWDC, would be delayed until “the coming year”.

I should have my head examined.

This announcement dropped as a surprise, and certainly took me by surprise to some extent, but it was all there from the start. I should have been pointing out red flags starting back at WWDC last year, and I am embarrassed and sorry that I didn’t see what should have been very clear to me from the start.

The reason I missed this is twofold. First, I’d been lulled into complacency by Apple’s track record of consistently shipping pre-announced products and features. Their record in that regard wasn’t perfect, but the exceptions tended to be around the edges. (Nobody was particularly clamoring for Apple to make a multi-device inductive charging mat, so it never generated too much controversy when AirPower turned out to be a complete bust.) Second, I was foolishly distracted by the “Apple Intelligence” brand umbrella. It’s a fine idea for Apple to brand its AI features under an umbrella term like that, similar to how a bunch of disparate features that allow different Apple devices to interoperate are under the “Continuity” umbrella. But there’s no such thing, technically speaking, as “Continuity”. It’s not like there’s an Xcode project inside Apple named Continuity.xcodeproj, and all the code that supports everything from AirDrop to Sidecar to iPhone Mirroring to clipboard sharing is implemented in the same framework of code. It’s a marketing term, but a useful one — it helps Apple explain the features, and helps users understand them.

The same goes for “Apple Intelligence”. It doesn’t exist as a single thing or project. It’s a marketing term for a collection of features, apps, and services. Putting it all under a single obvious, easily remembered — and easily promoted — name makes it easier for users to understand that Apple is launching a new initiative. It also makes it easier for Apple to just say “These are the devices that qualify for all of these features, and other devices — older ones, less expensive ones — get none of them.”

Let’s say Apple were to quietly abandon the dumb Image Playground app next year. It just disappears from iOS 19 and MacOS 16. That would just be Apple eliminating a silly app that almost no one uses or should use. That wouldn’t be a setback or rollback of “Apple Intelligence”. I would actually argue that axing Image Playground would improve Apple Intelligence; its mere existence greatly lowers the expectations for how good the whole thing is.1

What I mean by that is that it was clear to me from the WWDC keynote onward that some of the features and aspects of Apple Intelligence were more ambitious than others. Some were downright trivial; others were proposing to redefine how we will do our jobs and interact with our most-used devices. That was clear. And yet somehow I didn’t focus on it. Apple itself strongly hinted that the various features in Apple Intelligence wouldn’t all ship at the same time. What they didn’t spell out, but anyone could intuit, was that the more trivial features would ship first, and the more ambitious features later. That’s where the red flags should have been obvious to me.

In broad strokes, there are four stages of “doneness” or “realness” to features announced by any company:

  1. Features that the company’s own product representatives will demo, themselves, in front of the media. Smaller, more personal demonstrations are more credible than on-stage demos. But the stakes for a failed demo are higher in an auditorium full of observers.

  2. Features that the company will allow members of the media (or other invited outside observers and experts) to try themselves, for a limited time, under the company’s supervision and guidance. Vision Pro demos were like this at WWDC 2023. A bunch of us got to use pre-release hardware and in-progress software for 30 minutes. It wasn’t a free-range “Do whatever you want” session — it was a guided tour. But we were the ones actually using the product. Apple allowed hands-on demos for a handful of media (not me) at Macworld Expo back in 2007 with prototype original iPhones — some of the “apps” were just screenshots, but most of the iPhone actually worked.

  3. Features that are released as beta software for developers, enthusiasts, and the media to use on their own devices, without limitation or supervision.

  4. Features that actually ship to regular users, and hardware that regular users can just go out and buy.

As of today — March 2025 — every feature in Apple Intelligence that has actually shipped was at level 1 back at WWDC. After the keynote, dozens of us in the press were invited to a series of small-group briefings where we got to watch Apple reps demo features like Writing Tools, Photos Clean Up, Genmoji, and more. We got to see predictive code completion in Xcode. Pretty much everything that Apple has actually shipped, as of today, we got to see Apple reps use, live, back at WWDC.

For example, there was a demo involving a draft email message on an iPad, and the Apple rep used Writing Tools to make it “more friendly”. I was in a group of just four or five other members of the media, watching this. As usual, we were encouraged to interrupt with questions. Knowing that LLMs are non-deterministic, I asked whether, as the Apple rep was performing this same demo for each successive group of media members, the “more friendly” result was exactly the same each time. He laughed and said no — that while the results are very similar each time, and he hopes they continue to be (hence the laughing), there were sometimes subtle differences between different runs of the same demo. As I recall, he even used Undo to go back to the original message text, invoked Writing Tools to make it “more friendly” again, and we could see that a few of the word choices were slightly different. That answered both my explicit question and my implicit one: Writing Tools generates non-deterministic results, and, more importantly, what we were watching really was a live demo.

We didn’t get to try any of the Apple Intelligence features ourselves. There was no Apple Intelligence “hands on”. But we did see a bunch of features demoed, live, by Apple folks. In my above hierarchy of realness, they were all at level 1.

But we didn’t see all aspects of Apple Intelligence demoed. None of the “more personalized Siri” features, the ones that Apple, in its own statement announcing their postponement, described as having “more awareness of your personal context, as well as the ability to take action for you within and across your apps”. Those features encompass three main things:

  • “Personal context” — Knowing details and information about you from a “semantic index”, built from the contents of your email, messages, files, contacts, and more. In theory, eventually, all the information on your device that you wish to share with Siri will be in this semantic index. If you can look it up on your device, Siri will be able to look it up on your device.
  • “Onscreen awareness” — Giving Siri awareness of whatever is displayed on your screen. Apple’s own example usage: “If a friend texts you their new address, you can say ‘Add this address to their contact card,’ and Siri will take care of it.”
  • “In-app actions” — Giving Siri the ability, through the App Intents framework, to do things, in and across apps, that you currently do yourself, the old-fashioned way. Again, here’s Apple’s own example usage (and, just after it, a rough sketch of what such an intent looks like in code):

    You can make a request like “Send the email I drafted to April and Lilly” and Siri knows which email you’re referencing and which app it’s in. And Siri can take actions across apps, so after you ask Siri to enhance a photo for you by saying “Make this photo pop,” you can ask Siri to drop it in a specific note in the Notes app — without lifting a finger.
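For concreteness, here’s roughly what exposing one such action through the App Intents framework looks like from a developer’s side — a minimal, hypothetical sketch (the intent and its parameter names are mine, not Apple’s), modeled on the “add this address to their contact card” example above:

    import AppIntents

    // A hypothetical intent an app could expose to Siri and Shortcuts.
    struct AddAddressToContact: AppIntent {
        static var title: LocalizedStringResource = "Add Address to Contact"

        @Parameter(title: "Contact Name")
        var contactName: String

        @Parameter(title: "Address")
        var address: String

        func perform() async throws -> some IntentResult {
            // The app would look up the contact and attach the address here.
            // The promised Siri features would fill in both parameters from
            // onscreen context ("this address", "their contact card").
            return .result()
        }
    }

The framework itself has shipped; what Apple announced, and has never demonstrated, is Siri reliably choosing and invoking intents like this one from conversational requests and onscreen context.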

There were no demonstrations of any of that. Those features were all at level 0 on my hierarchy. That level is called vaporware. They were features Apple said existed, which they claimed would be shipping in the next year, and which they portrayed, to great effect, in the signature “Siri, when is my mom’s flight landing?” segment of the WWDC keynote itself, starting around the 1h:22m mark. Apple was either unwilling or unable to demonstrate those features in action back in June, even with Apple product marketing reps performing the demos from a prepared script using prepared devices.

This shouldn’t have just raised a concern in my head. It should have set off blinding red flashing lights and deafening klaxon alarms.

Even the very engineers working on a project never know exactly how long something is going to take to complete. An outsider observing a scripted demo of incomplete software knows far less than the engineers about how much more work it needs. But you can make a rough judgment. And that’s where my aforementioned hierarchy of realness comes into play. Even outsiders can judge how close a public beta (stage 3) feels to readiness. A feature or product that Apple will allow the press to play with hands-on (stage 2) is further along than a feature or product that Apple is only willing to demonstrate themselves (stage 1).

But a feature or product that Apple is unwilling to demonstrate, at all, is unknowable. Is it mostly working, and close to, but not quite, demonstrable? Is it only kinda sorta working — partially functional, but far from being complete? Fully functional but prone to crashing — or in the case of AI, prone to hallucinations and falsehoods? Or is it complete fiction, just an idea at this point?

What Apple showed regarding the upcoming “personalized Siri” at WWDC was not a demo. It was a concept video. Concept videos are bullshit, and a sign of a company in disarray, if not crisis. The Apple that commissioned the futuristic “Knowledge Navigator” concept video in 1987 was the Apple that was on a course to near-bankruptcy a decade later. Modern Apple — the post-NeXT-reunification Apple of the last quarter century — does not publish concept videos. They only demonstrate actual working products and features.

Until WWDC last year, that is.

My deeply misguided mental framework for “Apple Intelligence” last year at WWDC was something like this: Some of these features are further along than others, and Apple is showing us those features in action first, and they will surely be the features that ship first over the course of the next year. The other features must be coming to demonstrable status soon. But the mental framework I should have used was more like this: Some of these features are merely table stakes for generative AI in 2024, but others are ambitious, groundbreaking, and, given their access to personal data, potentially dangerous. Apple is only showing us the table-stakes features, and isn’t demonstrating any of the ambitious, groundbreaking, risky features.

It gets worse. Come September, Apple held its annual big event at Apple Park to unveil the iPhone 16 lineup. Apple Intelligence features were highlighted in the announcement. Members of the media from around the world were gathered. That was a new opportunity, three months after WWDC, for Apple to demonstrate — or even better, offer the press hands-on access to try for themselves — the new personalized Siri features. They did not. No demos, at all. But they did promote them, once again, in the event keynote.2

And yet, while Apple still wouldn’t demonstrate these features in person, they did commission and broadcast a TV commercial showing these purported features in action, presenting them as a reason to purchase a new iPhone — a commercial they pulled, without comment, from YouTube this week.

Last week’s announcement — “It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year” — was, if you think about it, another opportunity to demonstrate the current state of these features. Rather than simply issue a statement to the media, they could have invited select members of the press to Apple Park, or Apple’s offices in New York, or even just remotely over a WebEx conference call, and demonstrated the current state of these features live, on an actual device. That didn’t happen. If these features exist in any sort of working state at all, no one outside Apple has vouched for their existence, let alone for their quality.

Duke Nukem Intelligence

Why did Apple show these personalized Siri features at WWDC last year, and promise their arrival during the first year of Apple Intelligence? Why, for that matter, do they now claim to “anticipate rolling them out in the coming year” if they still currently do not exist in demonstrable form? (If they do exist today in demonstrable form, they should, you know, demonstrate them.)

I’m not trying to be obtuse here. It’s obvious why some executives at Apple might have hoped they could promote features like these at WWDC last year. Generative AI is the biggest thing to happen in the computer industry since previous breakthroughs this century like mobile (starting with the iPhone, followed by Android), social media (Meta), and cloud computing (Microsoft, Google, and Amazon). Nobody knows where it’s going, but wherever it’s heading, it’s going to be big, important, and perhaps lucrative. Wall Street certainly noticed. And prior to WWDC last year, Apple wasn’t in the game. They needed to pitch their AI story. And a story that involved nothing but table-stakes AI features isn’t nearly as compelling a story as one that involves innovative, breakthrough, ambitious personal features.

But while there’s an obvious appeal to Apple pitching the most compelling, most ambitious AI story possible, the only thing that was essential was telling a story that was true. If the truth was that Apple only had features ready to ship in the coming year that were table-stakes compared to the rest of the industry, that’s the story they needed to tell. Put as good a spin on it as possible, but them’s the breaks when you’re late to the game.

The fiasco here is not that Apple is late on AI. It’s also not that they had to announce an embarrassing delay on promised features last week. Those are problems, not fiascos, and problems happen. They’re inevitable. Leaders prove their mettle and create their legacies not by how they deal with successes but by how they deal with — how they acknowledge, understand, adapt, and solve — problems. The fiasco is that Apple pitched a story that wasn’t true, one that some people within the company surely understood wasn’t true, and they set a course based on that.

The Apple of the Jobs exile years — the Sculley / Spindler / Amelio Apple of 1987–1997 — promoted all sorts of amazing concepts that were no more real than the dinosaurs of Jurassic Park, and promised all sorts of hardware and (especially) software that never saw the light of day. Promoting what you hope to be able to someday ship is way easier and more exciting than promoting what you know is actually ready to ship. However close to financial bankruptcy Apple was when Steve Jobs returned as CEO after the NeXT reunification, the company was already completely bankrupt of credibility. Apple today is the most profitable and financially successful company in the history of the world. Everyone notices such success, and the corresponding accumulation of great wealth. Less noticed, but to my mind the more impressive achievement, is that over the last three decades, the company also accumulated an abundant reserve of credibility. When Apple showed a feature, you could bank on that feature being real. When they said something was set to ship in the coming year, it would ship in the coming year. In the worst case, maybe that “year” would have to be stretched to 13 or 14 months. You can stretch the truth and maintain credibility, but you can’t maintain credibility with bullshit. And the “more personalized Siri” features, it turns out, were bullshit.

Keynote by keynote, product by product, feature by feature, year after year after year, Apple went from a company that you couldn’t believe would even remain solvent, to, by far, the most credible company in tech. Apple remains at no risk of financial bankruptcy (and in fact remains the most profitable company in the world). But their credibility is now damaged. Careers will end before Apple might ever return to the level of “if they say it, you can believe it” credibility the company had earned at the start of June 2024.

Damaged is arguably too passive. It was squandered. This didn’t happen to Apple. Decision makers within the company did it.

Who decided these features should go in the WWDC keynote, with a promise they’d arrive in the coming year, when, at the time, they were in such an unfinished state they could not be demoed to the media even in a controlled environment? Three months later, who decided Apple should double down and advertise these features in a TV commercial, and promote them as a selling point of the iPhone 16 lineup — not just any products, but the very crown jewels of the company and the envy of the entire industry — when those features still remained in such an unfinished or perhaps even downright non-functional state that they still could not be demoed to the press? Not just couldn’t be shipped as beta software. Not just couldn’t be used by members of the press in a hands-on experience, but could not even be shown to work by Apple employees on Apple-controlled devices in an Apple-controlled environment? And yet they advertised them in a commercial for the iPhone 16, when it turns out they won’t ship, in the best-case scenario, until months after the iPhone 17 lineup is unveiled?

When that whole campaign of commercials appeared, I — along with many other observers — was distracted by the fact that none of the features in Apple Intelligence had yet shipped. It’s highly unusual, and arguably ill-considered, for Apple to advertise any features that haven’t yet shipped. But one of those commercials was not at all like the others. The other commercials featured Apple Intelligence features that were close to shipping. We know today they were close to shipping because they were either in the iOS 18.1 betas already, in September, or would soon appear in developer betas for iOS 18.2 and 18.3. Right now, today, they’ve all actually shipped and are in the hands of iPhone 16 users. But the “Siri, what’s the name of the guy I had a meeting with a couple of months ago at Cafe Grenel?” commercial was entirely based on a feature Apple still has never even demonstrated.

Who said “Sure, let’s promise this” and then “Sure, let’s advertise it”? And who said “Are you crazy, this isn’t ready, this doesn’t work, we can’t promote this now?” And most important, who made the call on which side to listen to? Presumably, that person was Tim Cook.

Even with everything Apple overpromised (if not outright lied about) at the WWDC keynote, the news media’s initial takeaway from WWDC was wrongly focused on Apple’s partnership with OpenAI. The conventional wisdom coming out of the keynote was that Apple had just announced something called “Apple Intelligence” but it was powered by ChatGPT, when in fact, the story Apple told was that they — Apple — had built an entire system called Apple Intelligence, entirely powered by Apple’s own AI technology, spanning everything from on-device execution to a new Private Cloud Compute infrastructure they not only own but power with their own custom-designed server hardware based on Apple Silicon chips. And that on top of all that, as a proverbial cherry on top, Apple also was adding an optional integration layer with ChatGPT.

So, yes, given that the news media gave credit for Apple’s own actual announced achievements to OpenAI, Apple surely would have been given even less credit had they not announced the “more personalized Siri” features. It’s easy to imagine someone in the executive ranks arguing “We need to show something that only Apple can do.” But it turns out they announced something Apple couldn’t do. And now they look so out of their depth, so in over their heads, that not only are they years behind the state-of-the-art in AI, but they don’t even know what they can ship or when. Their headline features from nine months ago not only haven’t shipped but still haven’t even been demonstrated, which I, for one, now presume means they can’t be demonstrated because they don’t work.

‘So Why the Fuck Doesn’t It Do That?’

In May 2011, Fortune published an extraordinary look inside Apple by Adam Lashinsky, at what we now know to be the peak, and (alas) end, of the Steve Jobs era. The piece opens thus:

Apple doesn’t often fail, and when it does, it isn’t a pretty sight at 1 Infinite Loop. In the summer of 2008, when Apple launched the first version of its iPhone that worked on third-generation mobile networks, it also debuted MobileMe, an e-mail system that was supposed to provide the seamless synchronization features that corporate users love about their BlackBerry smartphones. MobileMe was a dud. Users complained about lost e-mails, and syncing was spotty at best. Though reviewers gushed over the new iPhone, they panned the MobileMe service.

Steve Jobs doesn’t tolerate duds. Shortly after the launch event, he summoned the MobileMe team, gathering them in the Town Hall auditorium in Building 4 of Apple’s campus, the venue the company uses for intimate product unveilings for journalists. According to a participant in the meeting, Jobs walked in, clad in his trademark black mock turtleneck and blue jeans, clasped his hands together, and asked a simple question:

“Can anyone tell me what MobileMe is supposed to do?” Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?”

For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs. Walt Mossberg, the influential Wall Street Journal gadget columnist, had panned MobileMe. “Mossberg, our friend, is no longer writing good things about us,” Jobs said. On the spot, Jobs named a new executive to run the group.

Tim Cook should have already held a meeting like that to address and rectify this Siri and Apple Intelligence debacle. If such a meeting hasn’t yet occurred or doesn’t happen soon, then, I fear, that’s all she wrote. The ride is over. When mediocrity, excuses, and bullshit take root, they take over. A culture of excellence, accountability, and integrity cannot abide the acceptance of any of those things, and will quickly collapse upon itself with the acceptance of all three.


  1. Image Playground would make a ton of sense not as a consumer-facing app, but as an example project for developers. Long ago, Apple used to share the source code for TextEdit as an example project for Mac developers. (TextEdit is actually a low-key great application, though. It’s genuinely useful, reliable, and understandable.) Apple shares tons of sample code at WWDC each year. Image Playground would be a great sample project. The silly app icon even looks like something from a WWDC sample project. What Image Playground is not is a credible, useful generative AI tool. Yet Apple keeps talking about it — and showing it off in new hardware demonstrations — like it’s something they should be proud of and that anyone might credibly use for real-world work or even personal purposes. Image Playground does exemplify just how state-of-the-art the generative AI features are in Apple Intelligence, but not in the way Apple seems to think. ↩︎

  2. Skip to the 53-minute mark of Apple’s September “It’s Glowtime” event introducing the iPhones 16, and it’s Craig Federighi who says the following:

    “Siri will be able to tap into your personal context to help you in ways that are unique to you. Like pulling up the recommendation for the TV show that your brother sent you last month. And Siri will gain onscreen awareness. So when your friend texts you about a new album, you’ll be able to simply say, ‘Play that.’ And then you’ll be able to take hundreds of new actions in your apps, like updating a friend’s contact card with his new address, or adding a set of photos to a specific album. With Siri’s personal context understanding and action capabilities, you’ll be able to simply say, ‘Send Erica the photos from Saturday’s barbecue’, and Siri will dig up the photos and send them right off.”

    That’s about 40 seconds of keynote time I bet Federighi regrets — and that I suspect he was skeptical about including. It’s telling, though, that unlike WWDC, Apple didn’t show those features or spend even a full minute talking about them at the iPhone 16 event — despite the fact that, ostensibly, those features should have been three months closer to shipping than they were in June. Federighi’s title is SVP of software, and Apple Intelligence and Siri are “software”, but John Giannandrea (SVP of machine learning and AI strategy) is Federighi’s peer, not subordinate, on the org chart — both report directly to Tim Cook — and is responsible for Siri and Apple Intelligence. Why it was Federighi, not Giannandrea, pitching those features in the iPhone 16 event keynote almost certainly comes down to Federighi’s presentation skills and stage presence, not responsibility for the features themselves. But who’s going on camera to pitch these features and promise their future availability the next time? ↩︎

jheiss
59 days ago
Gruber woke up on the wrong side of the bed this morning.
lukeburrage
59 days ago
Steve Jobs: “You should hate each other for having let each other down.”
lukeburrage
59 days ago
Gruber: "I’ve never been angrier at myself"
2 public comments
martinbaum
58 days ago
He should have used Writing Tools to edit this way down.
satadru
58 days ago
Damn.
New York, NY

Key Codes 2.2.2


Many Tricks:

Key Codes displays information about the characters you type, as you type them into the log window. For each key, you’ll see its Unicode value, key code, and any modifiers.

Unless you’re a developer or script/macro tinkerer, you probably don’t need Key Codes. But when you do need it, it’s a godsend. There’s nothing else like it (anymore). Just a perfect little utility that the clever folks at Many Tricks have made available free of charge for a long time. (Available in the Mac App Store, too.)

jheiss
83 days ago
"There's nothing else like it"... Huh, sounds exactly like Karabiner-EventViewer that comes with Karabiner-Elements (which is a free tool for remapping keyboard events).

Allison Johnson Reviews the Samsung Galaxy S25 and S25 Plus: ‘Incredibly Iterative’


Allison Johnson, writing at The Verge:

Samsung’s Galaxy S-series is in its software era. Maybe the whole smartphone industry is, too, save for a few phones with hinges (Samsung’s included). But overall, we have exited the hardware-driven innovation cycle and are firmly in the midst of a software-based one. If you want proof, the Galaxy S25 and S25 Plus are a good place to start. [...]

This was all true of the S24 and S24 Plus and the S23 and S23 Plus. I couldn’t give you a good reason why the S25 stands out compared to Samsung’s last three generations of S-series phones. I don’t think Samsung can, either, because its entire sales pitch for the S25 revolves around software and AI capabilities — much of which will almost certainly be ported to previous S-series phones in short order.

When an innovative device form factor settles into maturity, the shift from groundbreaking new hardware dropping every few years to iterative evolution stands out. The heady, go-go years of iPhone-derived touchscreen smartphones (including iPhones themselves) weren’t that long ago. Iterative evolution is, let’s face it, more boring. Or at least it’s not exciting. But it’s inevitable.

The laptops that established the form factor were the PowerBook 100 series, which Apple shipped at the end of 1991. (Before the PowerBooks, laptops generally lacked built-in pointing devices, and were more like briefcases. Apple’s own 1989 Macintosh Portable was more like a suitcase.) Steve Jobs pulled the original MacBook Air out of its manila envelope in January 2008. Everything since then, for laptops, has been iterative.

The stretch from the PowerBook 100 series to the MacBook Air was about 16 years, give or take. The “smartphones are boring now” complaints really started to hit a few years ago — about 15 years after the 2007 original iPhone. Somewhere in the second decade is when year-over-year changes start to become more and more iterative. But compound interest generates tremendous wealth over time. People wrongly think Apple’s success is forged mostly by spectacular groundbreaking products, but the true key to their success is nonstop iterative improvement. That, as I wrote in 2010, is how Apple actually rolls. You wouldn’t want to use a 2010 MacBook Pro today. There will be small generational leaps and innovations to come (including, perhaps, an “iPhone Air” this year — and occasional bigger leaps, like 2020’s debut of Apple Silicon), but the wheels of technological progress are mostly done wowing us with one-, two-, and maybe even three-year improvements to phones. But trading in a phone older than that should continue to pack a significant amount of wow. So it goes.

Johnson:

Maybe this says more about what passes for a “small” phone in 2025, but the Galaxy S25 is secretly the best small Android phone you can buy in the US. That’s probably not intentional — more like a victory in a war of attrition. Google’s phones since the Pixel 5 only come in big and bigger, and niche small phone options like the Asus Zenfone have dropped out of the race. By merely continuing to exist with a 6.2-inch screen, the smaller S-series model has become the default option if you don’t want a huge Android phone.

Google’s Pixel 9 and 9 Pro have 6.3-inch displays, not too much bigger than the S25, but the trend is clear. All phones are getting bigger. Everyone knows the 5.4-inch iPhone 12 and 13 Minis weren’t hits, sales-wise, but the people who preferred them absolutely loved them. I’ll bet some of you are reading this, nodding your heads, with your aging 12/13 Minis still in your pockets, dreading the day you upgrade — knowing that the longer you wait, the larger the “smallest” new iPhone will be. Maybe this year’s much-rumored thin-is-in “iPhone Air” will take some of the sting out of that.

jheiss
87 days ago
I bought refurbed 12 and 13 minis for my kids until they became unobtanium. My wife was a mini holdout until the cameras in the Pro phones seduced her.