Black Mirror season 7?
Surely the Friend Reveal Trailer is going to turn out to be a promo for Black Mirror season 7?
Isn’t it?
[Edited to add link to relevant story at The Verge]
If we can tear our eyes away from the early days of the Starmer government and cast an eye upon other big electoral events in 2024, we find Rick Perlstein providing some much-needed perspective on Project 2025’s antecedents:
One thing that especially grates on me as a historian is how much of the discourse treats Project 2025 as if it’s some novel thing. I mean, what about Project 1921?
I refer, of course, to the administration of Warren G. Harding, who intoned in his inaugural address of dedicating himself to “the omission of unnecessary interference of Government with business.”
Not so much a radical new idea as the latest incarnation of several long-standing right-wing objectives, then.
With the Conservative party’s MPs about to see whose turn it is to succeed Rishi Sunak, how long will it be until some Tufton Street think tank brings us a report proclaiming that Project 2029 1 is the way forward for the UK?
(Never mind that the UK government is structured very differently from the US government. The important thing, especially if Trump wins, will be to capture the zeitgeist.)↩︎
Go read Doug Muir’s Fungal banking at Crooked Timber. Fascinating stuff:
So in the last couple of decades we’ve discovered that many plants rely on networks of soil fungi to bring them critical trace nutrients. This is a symbiotic relationship: the fungal network can access these nutrients much better than plants can, and in return the plants provide the fungus with other stuff — particularly energy, in the form of glucose sugar, made from photosynthesis.
It turns out this relationship is particularly important for large, long-lived trees. That’s because trees spend years as seedlings, struggling in the shade of their bigger relatives. If they’re going to survive, they’ll need help.
The fungal network gives them that help. The fungus not only provides micronutrients, it actually can pump glucose into young seedlings, compensating for the sunlight that they can’t yet reach. This is no small thing, because the fungus can’t produce glucose for itself! Normally it trades nutrients to trees and takes glucose from them in repayment. So it’s reaching into its own stored reserves to keep the baby seedling alive.
Gosh that’s beautiful isn’t Nature great! Well… yes and no.
Because the fungus isn’t doing this selflessly. The nutrients and glucose aren’t a gift. They’re a loan, and the fungus expects to be repaid […]
Doug Muir has contributed a terrific run of posts about science to Crooked Timber lately. Miles away from the stuff of politics that the site normally publishes, but right up my street. Well worth a read.
Filling a gap that the Internet Movie Database is never going to cater for: Starring The Computer.
Starring the Computer is a website dedicated to the use of computers in film and television. Each appearance is catalogued and rated on its importance (ie. how important it is to the plot), realism (how close its appearance and capabilities are to the real thing) and visibility (how good a look does one get of it). Fictional computers don’t count (unless they are built out of bits of real computer), so no HAL9000 - sorry.
I’m unsurprised that the Sinclair QL only shows up in Micro Men.
[Via MetaFilter]
In the wake of news of the Trump hush money trial’s verdict, Alexandra Petri’s satire on the jury’s deliberations is still well worth a read.
Juror 1: Okay, gentlemen. We can do this a number of ways. We can discuss and then vote. Or we can take a preliminary vote, see where we stand and then discuss. Our result has to be 12 to nothing, either way.
Juror 3: Let’s do a preliminary vote. Maybe we can all go home.
[Eleven hands go up. All heads swivel to Juror 8, sitting at the end of the table with his hand firmly on its surface.]
If you find your view of the article blocked by ads, give thanks for Instapaper if you have it.1
[Via The Overspill]
Your web browser’s Reader mode may not quite cut it. In my experience, under macOS/Safari the Reader view silently omits a couple of crucial lines in the middle of the article. And don’t get me started on how badly Firefox’s Reader view failed when I tried to see how it coped. Weird. Clearly the Washington Post are very keen to interfere with non-subscribers’ use of their web site. This footnote is long enough - aren’t they all? - so I’ll spare you more details. The point is to direct you to a funny article. Even funnier now that it turned out to be wide of the mark.↩︎
If the Daylight DC-1 proves to look as good in real life as it does in their launch video then it’s going to be a seriously tempting proposition.
Give me a touch-responsive monochrome E-Ink screen that refreshes at 60 frames per second and I can live without it displaying those images in colour.1 Granted the operating system is a tweaked version of Android, which is not ideal,2 but it’s also not that big a deal.
Part of me would prefer that the launch of the DC-1 was followed shortly after by the launch of a DC-1 Mini, matching that screen with the immensely more practical form factor of my beloved iPad Minis, but I’ve learned by now not to expect the tablet market to follow my little whims on that front.
[Via Daring Fireball]
Since I saw this video yesterday I’ve switched on the Greyscale colour filter on my iPad Mini 4 and my MacBook Air, just so I could get a sense of how unimportant colour is to much of what I do with my computer over the course of a typical day. The best bit is that even with greyscale mode on, if I send content from my MacBook Air via AirPlay to my Apple TV the greyscale falls away and the video I’m playing is presented in colour as normal. (Edited to add: of course I recognise that the iPad Mini’s Greyscale filter isn’t quite doing what the DC-1’s E-Ink screen does. The thing is, I wanted to pick up a general sense of how web content looks when you lose a degree of differentiation between areas of the user interface. Looking pretty good has been the short answer. I’m pretty sure I could cope with an E-Ink tablet screen that looked that good.)↩︎
Given how much of any tablet’s functionality is based around running apps that are essentially a gateway to a web service, I suspect that 90-95% of what I do on my iPad Mini today could be done on a DC-1 quite happily. Assuming that the Android Obsidian app runs under Sol:OS, you can push that up to 95%. There would be some friction as I got used to using a different Markdown editor to draft text, or a different RSS client to grab my feeds from Feedbin, but those bumps in the road would recede into the rear-view mirror after a few weeks. The remaining 5% are tasks that I already split between iPadOS and macOS because it’s easier to capture content on the iPad Mini and then drop it somewhere that my MacBook Air can see it. I suspect having an Android-based system would make aspects of that sort of process easier still. If I had to substitute Dropbox or Google Drive or NextCloud as cloud storage instead of iCloud, I reckon I could live with that.↩︎
Doug Muir, over at Crooked Timber, recounts a recent thought experiment:
A transcript from memory of an evening conversation with my two older sons:
“I heard that Jeff Bezos could run through the streets every day, throwing hundred dollar bills in the air, and he’d still be making money.”
“I wonder if that’s true?”
[…]
Pretty sure it is true, but it’s good to see the workings-out that led my thoughts in that same direction.
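Muir’s actual workings are at the link, but even a crude back-of-envelope version points the same way. A minimal sketch, with every figure below an illustrative assumption of mine rather than anything from his post:

```python
# Back-of-envelope check on the thought experiment. All figures are
# illustrative assumptions, not Muir's actual workings.
net_worth = 200e9        # assume a net worth of roughly US$200 billion
annual_growth = 0.07     # assume wealth compounds at ~7% a year
bill = 100               # he's throwing $100 bills

# Daily gain from wealth growth alone (ignoring intra-year compounding):
daily_gain = net_worth * annual_growth / 365

# A generous throwing rate: one bill per second, eight hours a day.
daily_thrown = 60 * 60 * 8 * bill

print(f"Gained per day: ${daily_gain:,.0f}")    # roughly $38,356,164
print(f"Thrown per day: ${daily_thrown:,.0f}")  # $2,880,000
print("Still getting richer:", daily_gain > daily_thrown)  # True
```

On those numbers he could throw a bill a second all day, every day, and still be scattering less than a tenth of what the growth alone brings in.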
As one commenter pointed out:
David in Tokyo, 05.07.24 at 8:39 am:

The freaky thing about JB is that he had to give up half his wealth to divorce his wife to cavort with younger women, but he bounced back to the top almost immediately.
I am reliably informed1 that Angie Wang’s New Yorker cartoon pondering the question Is my toddler a stochastic parrot? earned her a Pulitzer nomination last year.
Seems like a reasonable nomination. Had I been aware of the cartoon’s existence before getting word of it from today’s From The Editor’s Desk email, and had the New Yorker’s content management system permitted it, I’d have happily thrown them US$1 to read the whole thing.2 As it stands, this was my first visit to the New Yorker’s web site in a while, so it used up my one free look this month. Had I read something else there earlier this month, I’d have had to hope that someone had arranged for Archive.org to grab a copy.
One day our supposed “AI” successors may deem Angie Wang’s cartoon to be unbearably sentimental about her child’s development, but I can buy that what she’s seeing is not remotely like what LLMs are giving to the world.
Just because Large Language Models (LLMs) are at present a shiny new toy for executives who are willing to throw vast sums at the companies that claim to understand them in the hope of reaping efficiency gains, that doesn’t mean that a decade from now that particular bubble won’t have burst. Perhaps we’ll all look back on how much money was spent on LLMs when public services were starved for funding and feel bad about that.
[Via From The Editor’s Desk (The New Yorker)]
Not that there was any doubt. When I say “reliably informed”, this is in the context of a promotional email from the New Yorker hoping to tempt me into subscribing again, so a degree of bigging themselves up is to be expected. Last time I gave in to their blandishments and subscribed, I conformed to the cliche and ended up with a pile of back issues that I meant to come back to and read later yet never quite did. While I’m happy to link to their content occasionally if I feel the urge, I refuse to get sucked back into a subscription, particularly to a magazine with such a focus on the state of politics and culture in an entirely different nation. (Why yes, I do subscribe to the London Review of Books. Not quite the same thing, but nearer to what I hope for from a subscription.)↩︎
I’m never going to stop being sad that micropayments didn’t take off.↩︎
I’m indebted to The Overspill for sharing the intro to M. G. Siegler’s latest piece on the Apple Vision Pro:
Arguing about the shipment projections for Apple’s Vision Pro is sort of like arguing about how many tickets were sold on the fateful Hindenburg journey.[1] For one thing, we’re going to find out the number one way or another, eventually.[2] For another, we’re sort of overlooking the massive airship exploding in the sky.
[Footnote links omitted from this quotation since this is a paid subscriber post at Spyglass, so we non-subscribers can’t see that content.1]
On the one hand, we’re only a few months on from the Apple Vision Pro’s release to actual paying customers. On the other, rumours that the Vision Pro won’t see a hardware update until late 2026 do suggest that the Apple faithful are going to look back on this experience as an extended beta test of an idea that the world wasn’t ready for yet. The Vision Pro may come to be seen as a testbed for ideas that will reappear down the line in other products, much as the Xerox Alto was.
[Via The Overspill]
I wish micropayments had turned out to be a thing. I’d have willingly thrown Spyglass US$1 to take a peek at the rest of the linked article, but not US$10 for a month’s subscription or US$100 for an annual subscription.↩︎
Now that the embargo on reviews of the Humane Ai Pin has come to an end, Humane might be wishing it hadn’t. Let Cherlynn Low at Engadget stand as an example of what’s out there:
[Low is describing the way you find yourself needing to enter a number in order to validate your identity multiple times a day, on a device that lacks a keypad and so has to project numbers onto the palm of your hand. Sounds complicated, but I’m sure Humane’s designers thought users would get used to it over time.]
This gesture is smart in theory but it’s very sensitive. There’s a very small range of usable space since there is only so far your hand can go, so the distance between each digit is fairly small. One wrong move and you’ll accidentally select something you didn’t want and have to go all the way out to delete it.
“Smart in theory.” That damning phrase might serve as Humane’s epitaph.
Given how frequently the response to reading about the Ai Pin seems to be some variation of “what does this do that my smartphone can’t do better and faster?”, three thoughts arise:
Of course, almost everyone asking about using a smartphone to accomplish similar goals has not yet had hands-on experience of using an Ai Pin. Perhaps using the device in real life will transform opinions. That’s not the impression I get watching Joanna Stern’s 90-second video review on Twitter/X, but it’s early days yet.
[Via Daring Fireball]
Just as - in a different context - someone paying Apple US$3,499 for a Vision Pro is. The difference is mostly in the prospect that the company will stick with their new platform and produce future versions refining the concept, ideally at a lower price point. The release of a Vision Air or Vision Mini by Apple one day seems plausible. I’m not so sure Humane are going to survive to produce the Ai Pin 2.↩︎
Gary Ings’ article in HTML Review issue 3, a view source web, takes me back:
On my personal websites view source meant being able to adapt and remix ideas. Like drawing a map, elements and pages acted as landmarks in the browser to be navigated between. As a self-initiated learner, being able to view source brought to mind the experience of a slow walk through someone else’s map.
It’s also very nicely presented. Look at it on a decent web browser and be impressed.1
View Source on modern web browsers tends to reveal a whole heap of custom CSS, so it can take some digging to get down to the HTML building blocks of the article you’re reading. I’m glad I taught myself HTML in those simpler times.
Nowadays 99% of what I post here I write in Markdown2 because that’s the core of what I’m trying to communicate here: the words and hyperlinks with the occasional embedded image or video. Nobody will learn very much by using View Source on this content, I’m afraid.
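For the curious, here’s roughly what that conversion step amounts to. A minimal sketch using the python-markdown package (one converter among many; the sample text is invented for the example):

```python
# Markdown in, plain HTML out: the source really is just the words
# and the hyperlinks. Requires the python-markdown package
# (pip install markdown); the sample text is made up.
import markdown

source = "Go read [Fungal banking](https://crookedtimber.org/) - *fascinating* stuff."
print(markdown.markdown(source))
# <p>Go read <a href="https://crookedtimber.org/">Fungal banking</a> - <em>fascinating</em> stuff.</p>
```

View Source on the published page would show you little more than that one paragraph, which is rather the point.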
[Via Pixel Envy]
But do all those presentational gimmicks really add that much to the experience of reading the article? Opinions will differ, but it’s good to see what’s possible if you’re trying.↩︎
I do still have WordPress installed on this domain, serving some of my older-but-still-relatively-recent content that was written as HTML, but that’s not the future of this site. If I ever get round to rescuing older posts from various corners of my file system, they’ll be converted to Markdown and I’ll kiss WordPress goodbye. (But then, I’ve been saying that for years and not following through. Don’t hold your breath.)↩︎
The biggest surprise to me reading this piece about Transport For London’s experiment at Willesden Green tube station is that all this extra technology could piggyback on the existing, slightly outdated CCTV cameras.2
[This] was not just about spotting fare evaders. The trial wasn’t a couple of special cameras monitoring the ticket gate-line in the station. It was AI being applied to every camera in the building. And it was about using the cameras to spot dozens of different things that might happen inside the station.
For example, if a passenger falls over on the platform, the AI will spot them on the ground. This will then trigger a notification on the iPads used by station staff, so that they can then run over and help them back up. Or if the AI spots someone standing close to the platform edge, looking like they are planning to jump, it will alert staff to intervene before it is too late.
In total, the system could apparently identify up to 77 different ‘use cases’ — though only eleven were used during trial. This ranges from significant incidents, like fare evasion, crime and anti-social behaviour, all the way down to more trivial matters, like spilled drinks or even discarded newspapers.
So, the system could identify 77 different use cases, but they decided to enable only 11 of them. That graphic would look a lot scarier if the left pane listed all 77 potential use cases.
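The plumbing implied by that description sounds simple enough. A hypothetical sketch of the detection-to-alert routing (every name and category below is invented; TfL haven’t published their implementation):

```python
# Hypothetical sketch of the detection-to-alert flow described above.
# All names and categories are invented; TfL haven't published their code.
from dataclasses import dataclass

# A few of the reported use cases, from serious down to trivial.
ENABLED_USE_CASES = {
    "person_on_ground": "urgent",
    "near_platform_edge": "urgent",
    "fare_evasion": "report",
    "spilled_drink": "cleanup",
    "discarded_newspaper": "cleanup",
}

@dataclass
class Detection:
    camera_id: str
    use_case: str

def route_alert(d: Detection) -> str | None:
    """Turn a model detection into a message for the staff iPads."""
    if d.use_case not in ENABLED_USE_CASES:
        return None  # one of the use cases left switched off in the trial
    priority = ENABLED_USE_CASES[d.use_case]
    return f"[{priority.upper()}] {d.use_case} at camera {d.camera_id}"

print(route_alert(Detection("platform-2-east", "person_on_ground")))
# [URGENT] person_on_ground at camera platform-2-east
```

However clever the detection models get, everything in that last step still lands on a human with an iPad.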
Given how heavily the alerts the system generates rely upon station staff reacting to them to fix the issues identified, it’d be nice to imagine that the quantity of incidents revealed might argue for increased staffing levels.
Why do I have an uneasy feeling that it might not go that way?
[Via LinkMachineGo]
Title shamelessly borrowed from the subtitle of the source post.↩︎
If this technology can work with older CCTV it brings that Person of Interest moment that little bit closer to reality. We’d better hope that the Machine wins out over Samaritan.↩︎