I have been many things.

They all lead to who I am now.
Xandra Granade (she/they), storyteller.


Eschatology, Deconstructed

The world is ending, of that there can be no doubt. Everything is finite, and nothing finite can last forever. That ending may be entirely out of human reckoning, or we may fall to any of the existential crises that we face; climate change not the least of them. There's good reason to be optimistic, not to mention the argument that we have a moral responsibility to resist doomerism. Nonetheless, doom is always possible, and acknowledging that possibility is at best uncomfortable and at worst despairing.

When faced with existential dread and uncertainty, turning to fiction to help us understand, cope, and confront our own feelings is one of the most human things we can do. There is absolutely a moral dimension to how we do so, but fiction that admits the real possibility that the world might end — eschatological fiction — can help us not feel so alone with that existential dread.

Personally, this is part of what has long drawn me to the Final Fantasy series of games. Right there in the title, the series always asks the player to consider the very real possibility of the end of the world through fantastical analogy after fantastical analogy. Now in its sixteenth main iteration, the series has long explored eschatology through different metaphors and literary tropes. Final Fantasy X showed a world brought to the brink by unprocessed generational trauma, while Final Fantasy VII showed a world on the precipice of annihilation caused by the unchecked machinations of a single energy company. Sometimes, the series points inward; Final Fantasy XIII-2 asks us to come to terms with our own culpability in disaster, and to look directly at our own need to be acknowledged as a moral good in the world — to understand how that can obscure the costs of our actions. In its latest expansion, Final Fantasy XIV even directly takes on how eschatological stories such as those found across the series can fail to help us, landing us instead in overwhelming and all-consuming despair.

In its sixteenth and latest iteration, though, Final Fantasy has brought its focus further inward than ever before, directly placing its protagonist in opposition not only to the game's villain, but also to us as players. In this post, I'll explore my own thoughts on that radical shift in the series' focus, how I approach that shift as a fan, and why I think that change is so badly needed in the face of unrelenting technoutopianism that would bend our much-needed optimism into nowhere solutions.

What This Post is and is Not

This post is not, however, a review of the game. A review might simply conclude with "play it, it's a good game and well worth the cost" after extensive discussion of the game's technical merits, story, aesthetic qualities, and so forth. That is all good and necessary, but my goal here is rather the opposite: to carefully consider what I got out of having played the game. I have made my decision to purchase and play the game, leaving me now to reckon with the effects of having done so.

Necessarily as a result, this post will be extremely spoiler-y. I will assume from here on out that you have either played Final Fantasy XVI in its entirety, or that you are not bothered by a pointed discussion of its plot. As this post deals extensively with the history of the series, I'll also issue a spoiler warning for anything with "final" and "fantasy" in its title.

What The Game is Not

Before I can explore what Final Fantasy XVI is, I must do due diligence in being clear about what the game is not. The game is fundamentally limited in some extremely important ways that, sadly, not only undermine its own ideals but also cause actual harm in the real world. No exploration of the game's plot and impact is complete, or even truly started, without the recognition that FFXVI revolves strongly around a fantastical analogy for real-world slavery, yet somehow manages to omit all depiction of Black people, who even today bear the generational brunt of that cruelty, compounded with new and continuing forms of oppression.

The conspicuous whiteness of the game may be best summed up in the character of Elwin Rosfield, monarch of the Rosarian nation. We are told continuously throughout play that Elwin is a true ally to the Bearers, the slave caste within the game's story, but we are left to simply assume this to be the case. From very early on, we are shown that Rosaria depends economically on the labor of its Bearers, and still treats them as property, albeit with less casual cruelty than its neighbors. That mere window dressing around the abhorrence of slavery is all the proof we are given of Elwin's standing as an ally until late in the game, when we learn that he had undertaken a generations-long project to dismantle slavery in Rosaria despite opposition from the nation's rich power brokers. Presumably, the Bearers are supposed to be thankful for this delayed and secret support, continuing to serve their owners for the decades that Elwin's project will take to free them.

In his capacity as the main protagonist, Elwin's son Clive Rosfield is at least significantly more active, taking charge of a network of Bearer liberation cells. Early in the story, Clive is taken captive by one of Rosaria's enemy states and made into a Bearer himself, brutally driven to serve his captors for thirteen years. FFXVI shows us how this motivates Clive to take up the sword on behalf of his fellow Bearers, but doesn't bother asking us to grapple with the complicated legacy left by his father, nor with why it took him thirteen years of slavery to be convinced of the cause.

These omissions are frankly racist, and in ways that undermine the game's key themes and impede its moral clarity. The game is absolutely weaker, perhaps even catastrophically so, for these problems, and should be criticized strongly on that basis. I am myself white, and thus not in the best position to give voice to these criticisms, but rather am obligated to listen to them carefully, without defensiveness, and to amplify them as widely as possible. In that vein, I encourage you to read and watch the following treatments on racism in FFXVI:

With that in mind, and in full knowledge that the game is problematic, let me proceed to discuss what Final Fantasy XVI is and what it does accomplish with the sixty hours of attention it demands from its player.

Crystalline Dependency

Throughout the series, Final Fantasy has made extensive use of crystals as a visual and narrative motif. In Final Fantasy VII, crystallized essence of planetary life serves as the basis for all magic. In Final Fantasy XIII, people who achieve their divine tasks are turned to crystal until the gods need them next. In Final Fantasy XIV, the game begins with a crystal goddess exhorting us to "hear, feel, think." That we later learn that the selfsame goddess came into being by consuming half of the entire population of the ancient world, and that we later destroy what little is left of Her, keeps us awake at night grappling with how much we depended on Her earlier benevolence, and with the complexity of what brought Her to commit the heinous acts that serve as the basis for the game's entire plot. In Final Fantasy IX, crystals are the source of all life, while in Final Fantasy Tactics, the dead return to crystal at the end.

Perhaps most notably, though, Final Fantasy Type-0 tells the story of a world divided into "crystal states," nations each founded around the magical powers granted by their respective crystals. Each country's crystal demands an extreme cost, consuming the memories of soldiers who depend on their powers, and locking the continent in eternal war. Fate in FFT0 is, at best, the crystals sustaining themselves through neverending conflict, a thin analogy for the real-world military–industrial complex. Peace is fundamentally incompatible with the existence of crystal states, as FFT0 shows a world entirely dependent on magical weapons powered by crystals.

It is in this tradition that Final Fantasy XVI builds its world of nations dependent on gigantic mountain-sized Mothercrystals not just for military dominance, but as energy sources for every aspect of daily life. An early scene shows court gardeners maintaining topiaries with Aero spells cast from shards mined out of Mothercrystals and imported into Rosaria. Blacksmiths' forges are powered by Fire spells, and crops are irrigated with Water spells.

All the while, lands around Rosaria are increasingly becoming barren, falling to an ever-expanding Blight in which no crops can grow, no animals can thrive, and life comes to a silent end. Many of the game's monster enemies are simply animals displaced from Blight-stricken lands, and much of the game's conflict comes from scarcity as refugees flee the Blight. A dozen or so hours into the game, we learn that the Blight is the toll exacted by the Mothercrystals, the void left behind as they drain magical energy from the lands. This revelation in turn is later displaced by the even more pointed truth that even absent the Mothercrystals, magic of any kind necessarily draws from the land and spreads the Blight; the Mothercrystals simply make it much easier to depend on magic, to trade magical energies as physical trinkets.

The analogy to real-world climate disasters is obvious, of course, cemented in quest names like "Inconvenient Truth." Where our capitalist societies depend seemingly inexorably on fossil fuels, the feudal states of FFXVI depend seemingly inexorably on crystalline blessings. Perhaps for the first time in the long-running series, the crystals are not only complex, but actively malevolent objects to be destroyed.

At this point, I could imagine stopping and writing a different essay about how the Mothercrystals serve as climate fiction, where they capture or fail to capture the complexities of our dependencies on fossil fuels, or how the game deeply interweaves its twin stories of liberation from oppression and independence from magic. That would be, I think, a valuable and necessary take on the impact of the game's narrative, but I want to focus on a different element instead: how Final Fantasy XVI directly implicates the player as being the origin of its dystopia.

We Bear Final Witness

Needless to say, the story expects the player to eventually succeed in destroying Ultima, creator both of all humankind and of the Mothercrystals that bring so much grief to the world. Following a battle so replete with Christian imagery that Ultima throws out attacks like "The Rapture" in between calling the player character "Logos," we're shown a scene presumably set some centuries later. In this last glimpse, we're shown two young boys struggling with everyday chores while wishing that they had the power of Eikons and magic to help, before cutting to the boys playing out scenes from the game as youthful make-believe. Before fading to black, the camera pans back to show the book of fairy tales that inspired the youthful play, immodestly titled Final Fantasy.

The strife of the game, the immense personal cost that Clive and his family paid to end their world's crystalline dependency, might ultimately fail to save the world. Even after succeeding in felling Ultima, humanity still yearns for the power and convenience of the Mothercrystals' blessing. In that closing, the game makes explicit the deconstruction that runs throughout its plot, denying us any hope of hiding in ignorance. In wanting this fantasy, even after being shown what it costs, we are the young children threatening to undo all that Clive worked for.

Looking back through that dismal lens, then, much of the plot becomes more clearly focused on the player and their actions. After all, the player continually compels Clive to use his magical abilities to defeat — to murder — score after score of enemies. Inhabiting his Eikon, Ifrit, is intensely traumatic for Clive, but we force him to do so merely by pressing both sticks in at once. The game takes every opportunity it can to show us the toll that Eikons take on their summoners' bodies, and yet it's a cheap and easy source of advantage in combat. Are we any better than the young boys at the end, wishing for an easy shortcut around difficulty in the form of magic?

It gets worse still when we consider Ultima's plan: to turn Clive into a perfect vessel for his power, erase his will, and conquer his body as an ultimate weapon. As the player, we advance Ultima's plan when we dive into menus to unlock new abilities and grow the power of Clive's Eikonic feats. We steer Clive towards Ultima's aims with our controllers, possessing him as completely as Final Fantasy VI's slave crowns. Clive may have escaped his fate as a Bearer, but we still control him as our own avatar.

In the end, then, the strict linearity of the game's plot isn't a weakness, but is shown to be the characters resisting the player's influence. In the moment of his final victory over Ultima, Clive tells us that "the only fantasy here is yours, and we shall be its final witness." Cheesy and corny as the title drop is, Final Fantasy XVI uses it well (along with minor key–shifted versions of the series' iconic prelude) to make its deconstruction truly inescapable: victory necessarily requires destroying the Mothercrystals, resisting Ultima's dominance over the world, and ending the fantasy.

Truth and Belief

Here, I might well be accused of reading too much into the story, of injecting my own complicated feelings about video games and burdening the plot of Final Fantasy XVI with them. In defense, though, I will direct you back to the "Inconvenient Truth" side quest. Given how directly that title speaks to the game's climate fiction themes, one might reasonably expect the side quest to concern itself with how society came to depend on the Mothercrystals, or how society denies the reality of the Blight and its causes, but no: the quest concerns itself with how the Bearers first came to be enslaved. The true story of their oppression is considered heretical, as it would offend and upset modern political elites, a story beat that feels far too relevant as fascists like DeSantis work to rewrite American history to remove any mention of the US as an oppressor.

The quest is given to you by Vivian, a professor of history and one of two characters responsible for providing you with a reference to the game's extensive lore. As you return the banned book to her, she delivers a monologue on how important stories are, how they shape the truth by shaping belief, and how there is no objective truth about history that can ever be determined except through belief. To Vivian, truth is the consequence of belief, and belief is the consequence of stories. As arbiter of the game's lore, and embodiment of many of its most pointed themes, she tells you that the game matters because its plot can shape your beliefs about the world. What to make, then, of the young boys who believe in the superiority of a world with magic, a world that drains the land to power their society?

Throughout, the game is in constant conversation with the tropes and trappings of the series, continually bringing them under the same dismal lens. Jill, Clive's childhood friend turned comrade-in-arms, notes that your actions are causing monsters to become stronger — a commentary on the series' level grind. Merchants are able to offer you their goods because they also profiteer from war. Gysahl greens aren't just for chocobos, but are part of the food cycle; chocobo stew is likewise a delicacy served at the local pub.

There is a constant feeling of examining and putting away one's toys, of making something new in their stead. In other Final Fantasy games, there is a tradition of a character named Cid granting you an airship, and with it, cheap and uninhibited travel across the world. In Final Fantasy XVI, however, Cid dies at Ultima's hands, and it is his daughter Mid who is tasked with making the realm's first airship. A brilliant inventor, Mid easily produces a toy model of the airship from Final Fantasy III, IV, and XIV. A cutscene shows Mid carefully considering her airship prototype before realizing that it could be used to drop bombs and to create more war orphans like herself. Instead of building the actual airship, for the first time in the series, she literally buries it. If magic is dangerous, so too is the Clarkian escalation of technology into magic.

Other side quests have you teaching characters how to grow food without magic, how to graft morbols onto other plants to help them grow, how to power forges without the use of crystals, and how to defend themselves with their swords and their democratic governments alike. It is telling, then, that the penultimate villain is a monarch who surrendered his will entirely to Ultima and who called himself the Last King. In Ultima's world, and in the player's world, monarchs give way to fantasies, gods, and dependence. In Clive's world, monarchs give way to communities, democracies, and independence.

Clive's legacy, hinted at in the final scene, is a world in which people can fend for themselves without relying on Ultima, but it is also a world in which fantasy can lead them back into dependence and reckless excess.

Get In The Eikon, Clive

Deconstruction, in the sense of literary criticism, works best when it highlights the contradictions and tensions inherent in a body of work or in a genre. At a personal level, I relate most to deconstruction that comes from a place of intimacy, sitting within a genre and all its flaws, and expressing its creators' deep and unabating desire for their art to be better. Within video games, that loving and intimate deconstruction has been put forth by titles such as Metal Gear Solid 2, NieR:Automata, and more recently, Tales of Arise; TVTropes suggests far more examples as well. After thirty-five years of the series, though, Final Fantasy has plenty to deconstruct within itself, and FFXVI steps up to do just that.

Compared with deconstructions of novels, films, or TV shows, video game deconstructions tend to center more heavily on the idea of ludonarrative dissonance, the tension between plot and mechanics. In Spec Ops: The Line, that dissonance plays out as a metaphor for "just following orders" to commit war crimes. In Tales of Xillia, the player is given an immensely powerful ability, but every use of it increases the chance of reaching the bad ending, in which the player causes extensive and extreme harm to untold millions; the only morally consistent option is to never use that ability at all. In the case of Final Fantasy XVI, the conflict between Clive and Ultima is also a conflict between Clive and the player, such that the plot centers heavily on the concepts of free will, submission to the divine, and submission to state power.

There, Final Fantasy XVI finds familiar ground not in video game deconstruction, but in the 1995 anime series Neon Genesis Evangelion. Both are deep deconstructions of their respective genres and media, centering deeply on free will and submission, with divinity as a metaphor for coercion and force. That FFXVI directly acknowledges Evangelion without making the plot depend on that reference deepens the deconstruction, as it serves to remind the player that the game is in conversation with a wider culture of stories as well as its own numbered predecessors. The reference, together with a later scene in which Ultima tries to break down Clive's will in ways that closely echo one of the darker endings to Evangelion, also acts as an in-text confirmation that the deconstruction is intentional. What Final Fantasy XVI has to say about ludonarrative dissonance and the act of embodying a character through playing a game, Evangelion had to say about the demands fans make of anime. In both cases, participation is problematized and put under critical examination.

Where One Journey Ends

Deconstructions land best when they offer the potential for reconstruction, when the destruction that they apply to their own genres and media is in service of the creation of something better. If the player in Tales of Xillia resists pressing the button that gives them immense power, then peace becomes possible. If everyone playing Metal Gear Solid V across the world can agree to never build nuclear weapons in-game, then peace in the real world becomes possible. If everyone playing Final Fantasy XVI can reject technoutopian fantasies in favor of true community, then we may yet find that climate conflict is not inevitable — true peace may be possible.

To the extent that it is not limited by its refusal to surpass its own limitations and harms, Final Fantasy XVI succeeds by showing us hope that comes from depending not on state power or magical fuels, but on each other. Clive wins by relying on his brother, his childhood friend, his network of like-minded activists, and even his own former enemies. The game dares us to believe, if only for a moment, that Ultima and the real-world submission that he stands in for cannot defeat true and loving community, no matter how seductive his fantasies are.

The Final Fantasy series has seen the player act as terrorists, theatre performers, military school students (twice!), religious pilgrims, revolutionaries, the cast of Star Wars: A New Hope, and more, all the while giving us hope even in its bleakest of plots. With Final Fantasy XVI, we are invited to put down the controller, and act in the real world; to put into practice the boldest of hopes, and to make final the seductive fantasies that we are held captive to.

A Progressive Case for the Fediverse

In today's newsletter, I'll lay out my case for why progressives should, as a matter of political principle, do the work needed to embrace and improve the fediverse. By necessity, making that argument cannot mean endorsing the fediverse in its current state, in which it institutionalizes racism and other forms of hate; rather, this argument implores progressives to get involved, to build communities on the fediverse, and to do both the social and technical work needed to make those communities inclusive of all races, genders, orientations, and physical and mental disabilities.

At the same time, I also want to clarify that this argument is not intended to preclude the potential for building networks that are more consistent with progressive ideals than the fediverse. No honest expression of progressive ideals should ever posit a terminal state for societal improvement; we are a flawed society, and the nature of progressivism is to continually improve society to redress those flaws. Rather, I would argue that the fediverse in its current form is a baseline for what a progressive view of the internet should be: a flawed and problematic thing, but one that can be improved at a structural level through political and community organizing.

My argument will also necessarily involve some degree of technical jargon. While this is not a technological argument, and is not intended to be read exclusively by technical experts, it is absolutely and unavoidably true that technology is increasingly the lever used by capital to enact societal change. Understanding that fulcrum is essential; to borrow and rather mangle a phrase from Nora Tindall, progressivism does not demand that you be able to repair your car, but it does require you to understand that it burns fossil fuels. Similarly, some level of jargon is necessary as a matter of political rather than technical literacy.

I also need to stress that this is not a how-to guide in any form. It's a polemic, a political argument, and hopefully a well-reasoned rant, but it isn't a tutorial. For that, I highly recommend the Mastodon Quickstart for Twitter Users by zkat.

Finally, I want to deeply thank Sarah Kaiser and Nora Tindall for feedback on drafts of this post.

Progress as Durable Improvement

With those clarifications in place, it's worth starting by expressing what we demand out of any progressive approach to social media. I posit that as progressives, we should demand no less than we do in any other political arena: durable improvements to society that work at a structural level to preclude injustice in the future. Whereas liberalism tends towards fixing problems in a symptomatic way, perhaps by the allocation of aid packages, tax credits, and other targeted action, progressive politics generally demands that immediate fixes are paired with structural changes that will help prevent the same or similar problems from occurring in the future. To approximate nearly to the point of mutilation, liberals are concerned with winning elections, while progressives are concerned with expanding voting rights. Liberals want to fire bad cops, progressives want to reduce police budgets and keep military-grade weaponry out of law enforcement hands. Liberals want to offer free bus fare to low-income riders, progressives want car subsidies to be applied to public transit instead.

Liberals and progressives tend to agree that it is bad to post to Truth Social, Parler, Gab, or other right-wing Twitter clones, but it's what comes next that separates the two philosophies. Liberals want to replace Twitter with a corporate social media platform run by good billionaires, progressives should want to build something better.

The Core Problem of Centralized Media

To build something better than the next Twitter requires understanding what went wrong with Twitter at a deeper level than what generally fits in, well, a single tweet. It's tempting to say that the problem with Twitter was that it was bought out by the richest man on the planet, and that said man is a virulent transphobe, a COVID-19 denier, a fierce opponent of labor rights, and a right-wing operative who helped launch DeSantis' campaign. All of these are true, and liberal political philosophy would be right to focus on these as problems that require a response. It is true that Elon Musk has turned Twitter into a fairly explicit right-wing disinformation network, but progressive political philosophy both invites us to and demands that we understand that to be symptomatic of deeper structural problems.

In particular, and I cannot emphasize this enough, the problem is that a single person can simply buy a significant fraction of all human communication. Progressive politics does not generally oppose corporate interests as a matter of aesthetics or us-versus-them thinking, but as a recognition that centralized control of social organizations by unaccountable bodies is in general somewhere between bad and catastrophic. Privatization of public utilities is bad because it allows centralized control over the pricing and availability of basic infrastructure. Monopolies are bad because centralized control over markets creates wildly unjust economic inequality. Centralized social media is bad because it allows a small number of unaccountable owners to dictate how people communicate and advocate for their values.

That Twitter was bought out by a particularly egregious asshole makes the problem obvious and urgent, but it neither started with nor ends with him.

Interoperability as Robustness

To a progressive, structure matters, so to understand what went wrong with Twitter, let's rewind back to when the Internet was structured very differently. (Here is where the technical jargon comes in, by the way. I promise that it's as minimal as I can make it while still staying true to my own progressive politics.) To browse the Web in 1993, you might do something like launch NCSA Mosaic, an early Web browser. You might then type something into the address bar (labeled "URL" in Mosaic) like http://info.cern.ch/hypertext/WWW/TheProject.html, and press Enter.

A screenshot of NCSA Mosaic showing an early web page about the World Wide Web hosted at http://info.cern.ch/hypertext/WWW/TheProject.html. The browser is running in WSL and is depicted against a background of a FFXIV character on Mare Lamentorum.

With the magic of the early-90s Internet, you'd have then gotten a window very much like the one above. To do that, your computer would have used the address you provided to make a request to another computer somewhere else on the Internet. A computer that is configured to reply to that kind of request is called a server; back then, many servers were the same kind of computer you might have on your own desktop, with nothing especially fancy about them.

A diagram showing a URL broken down into three parts: the protocol, host, and path

Each address tells your browser three distinct things: how to ask the other computer for your page, what computer to ask, and what page to ask them for.

The first part, how to ask them, is sometimes known as the protocol, and is key to what makes the Internet what it is. Your computer and the server may be made by different companies, may run different software, may have been made at different times, and so forth, but they can still talk to each other because both sides have agreed to speak the same protocol. Today, almost everything goes through either the HTTP or HTTPS protocols, so this part of the address is slightly redundant, but it's nonetheless critical to allowing your browser to talk to whatever other website it wants even if that server doesn't know anything about your computer.
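To make that agreement concrete, here is a minimal sketch — in Python, which this post otherwise doesn't use, so treat it purely as illustration — of what "speaking the HTTP protocol" actually means. The request a browser sends for the example page is nothing more than a few lines of agreed-upon plain text, readable by any server that speaks HTTP:

```python
# The protocol is literally an agreement about what text to exchange.
# This builds the plain-text HTTP request a browser would send for the
# example page from earlier in this post.
host = "info.cern.ch"
path = "/hypertext/WWW/TheProject.html"

request = (
    f"GET {path} HTTP/1.1\r\n"   # what page to ask for, and how
    f"Host: {host}\r\n"          # what computer we think we're asking
    "Connection: close\r\n"      # hang up after this one page
    "\r\n"                       # a blank line ends the request
)
print(request)
```

Any server that speaks HTTP can answer that request, no matter who wrote its software or what hardware it runs on — which is exactly the interoperability at stake here.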

Next, your browser back then could talk to any server that was available on the Internet. Universities, municipal governments, Internet service providers, and other organizations may each have made their pages available on servers that they owned and controlled, so you needed a way to tell your browser what computer to talk to. The details are complicated and kind of beside the point, but computers that want to serve things on the Internet are generally given names; in this example, info.cern.ch. Your browser can use that name to look up what server to talk to, again without needing to know any other details about that server.

Finally, the last part of that address specifies what you want that server to give you. This part is generally up to the server, and acts as the name of a specific page on that server.
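For the programmatically inclined, the three-part breakdown above is so standard that Python's standard library (again, my choice of language, not the post's) will do it for you:

```python
# Splitting the example address into the three parts described above:
# the protocol (how to ask), the host (whom to ask), and the path
# (what to ask for).
from urllib.parse import urlsplit

parts = urlsplit("http://info.cern.ch/hypertext/WWW/TheProject.html")
print(parts.scheme)  # the protocol: "http"
print(parts.netloc)  # the computer to ask: "info.cern.ch"
print(parts.path)    # the page to ask for: "/hypertext/WWW/TheProject.html"
```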

Each part of that address exists because, above all else, the early Internet was interoperable. Your computer and whatever other servers you wanted to talk to didn't have to share anything beyond a basic agreement about what protocol to use and how to look up names. Both of those were owned by standards bodies, such that anyone could make a new browser or new server software without having to ask anyone for permission, and without having to update everything else on the Internet.

There were some bitter, bitter fights to make the Internet less interoperable, but that speaks to the immense importance of interoperability as a progressive value: at a technical level, it prevents any single piece of software from precluding any other software from working. At a social level, that guarantees that a single company cannot control what is and isn't allowed on the Internet.

Web 1.9

Somewhere along the way, though, things changed. Servers had to get bigger and bigger to handle all the new computers on the Internet making requests, pages got more and more complex instead of just having text and some occasional images, companies offering stuff on the Internet got bigger and bigger, and most browsers are effectively made by Google. Now, if you look at your address bar in Chrome, Safari, Firefox, Edge, or whatever else, you might see something like https://twitter.com/cgranade/status/1643684152280231936. That address breaks down in exactly the same way as before, telling us that our browser used the HTTPS protocol to ask Twitter for a specific tweet that I wrote.

The trouble is, every tweet will have an address that's not much different from that. Back in the 90s, if you wanted to learn about particle colliders, you'd go to cern.ch, and if you wanted to learn about your local city government, you might go to something like seattle.gov.

Increasingly, though, you might go to https://twitter.com/CERN or https://twitter.com/CityOfSeattle. In both cases, your computer has to ask twitter.com and only twitter.com for what is at that address. Whoever owns twitter.com can decide on everything your computer knows and sees about Twitter as a social media service. Musk buying the company that owned that one, single server (made up of untold numbers of actual servers all owned or rented by Twitter and working together to pretend to be a single server when talking to your computer) was enough to take control of all communication that happened on Twitter. Here, we come again to the progressive idea that it's not enough to respond to Musk by trusting all our communications to a different single server that can be bought out by some other billionaire; we must instead go even deeper to understand how we got to this point in the first place, and what we can do better at a structural level going forward.

Much of the centralization of the Internet started happening with a set of technologies that collectively became known as "Web 2.0," allowing pages to act more like software in their own right, such that twitter.com isn't only the name of a server giving you different pages, but also an entire software application for interacting with those pages. Unlike the early 90s Web, your browser no longer just gets pages from different servers; it gets software that often can only interact with specific servers.

The situation is even worse on phones and other kinds of embedded devices like game consoles, smart TVs, and so forth. If something has a Twitter app, that app is software that can only talk to twitter.com. Your browser can at least still ask for software from other servers like facebook.com, but a Twitter app will never let you look at Facebook.

It didn't have to be that way, of course. We could have had a wonderful Web 1.9; all the nice new technology to expand what the Web can be without narrowing it down to a scant few servers controlled by an even smaller cluster of companies. Interoperability isn't enough to break down control over social media on the Web, though. After all, twitter.com is still delivered to your computer using protocols like HTTPS. While interoperability may allow for new software and new sites to participate in the Internet, it doesn't on its own guarantee that new users can participate without their words and images being owned, controlled, and moderated by a single unaccountable party. For that, to really realize a progressive vision for what the Web could be, we need to go beyond interoperability to federation.

Interoperability has always been important to the Web, but as progressives we need to look at how interoperability didn't protect against centralized control over how users interact with the Web. Interoperability on its own is far more about preventing any one software vendor from building a monopoly; it doesn't at all preclude one social media platform from controlling communication and bending it to serve explicit right-wing causes.

I posit that the problem is that while interoperability focuses on a dichotomy between users as consumers and servers as publishers, that's not how most people use social media. Rather, people use social media to be, well, social; to talk to other users. On social media, you don't just read what other people have written, you build up lists of whose writing you'd like to read and even post your own. Interoperability between various kinds of software on its own doesn't guarantee that if you post to one server, another user can read it from a different server.

To achieve that, different servers need to talk to each other using an interoperable protocol; we generally call this kind of interoperability federation. If that all sounds a bit arcane, this is precisely how e-mail works. If you log into GMail and write an e-mail to my consulting company at cgranade@dual-space.solutions, that address isn't just how you find a server, but rather refers to a particular user on that server. The server at gmail.com uses that information to find dual-space.solutions and deliver your message to the user cgranade, allowing us to communicate with each other even though I don't use GMail.
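Mechanically, that routing step is easy to sketch: the sending server splits the address at its last @, treats the right-hand side as the name of a server to look up, and delivers the message to the named user there. (A real mail server would do a DNS lookup and speak SMTP; this toy function just shows the split.)

```python
def route_email(address: str) -> tuple[str, str]:
    """Split an e-mail address into the user and the server
    responsible for delivering mail to that user."""
    user, server = address.rsplit("@", 1)
    return user, server

# GMail would look up dual-space.solutions and hand the message
# to the user cgranade on that server.
user, server = route_email("cgranade@dual-space.solutions")
```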

From a progressive standpoint, that last part is critical: your use of GMail, Outlook.com, or whatever other e-mail service doesn't apply any pressure for me to use the same service provider. In particular, I don't use GMail for privacy reasons, and so the fact that e-mail is a federated protocol means that my boundaries around privacy and security are better respected than if I needed to use the same e-mail server in order to communicate with you.

That's not merely theoretical, either. If you want to text someone, there's a dance that's almost rote by this point, working out whether to use Twitter DMs, Discord messages, Facebook Messenger, Signal, or whatever else. Each of those services requires that you communicate only with other users on the same service. While you can send me an e-mail from GMail, you cannot send me a text from Facebook Messenger. (Matrix, however, is federated in a similar way to e-mail, but that's somewhat beyond the scope of this post.)

Turning, then, to a progressive response to the right-wing takeover of Twitter: a social media platform that is robust to future right-wing takeovers is one that is both interoperable and federated. Thankfully, such a platform exists, and has existed since about 2017: the fediverse.

It's perhaps a silly name (though arguably no sillier than the early-00s "blogosphere"), but the term refers to the loose network of social media servers that communicate with each other using a protocol known as ActivityPub — the choice of server-to-server protocol is irrelevant when you're posting to the fediverse or reading someone else's posts, just as it's irrelevant to e-mail that servers communicate with each other using a protocol known as SMTP. Rather, what matters is that if you're using mathstodon.xyz to read my social media posts, you can ask mathstodon.xyz to follow @xgranade@wandering.shop, causing the server at mathstodon.xyz to communicate with the server at wandering.shop. Just as with e-mail, we don't need to use the same server to communicate and share posts with each other.

Servers that participate in the fediverse this way (sometimes called instances) can be powered by any number of different software packages. An instance can use Mastodon to offer a Twitter-style social media feed, but users of that instance can follow users not just on other instances running Mastodon, but also on Pixelfed, Calckey, GoToSocial, Lemmy, KBin, BookWyrm, and other kinds of instances.

Critically, though, that federation is not unlimited or without restrictions. In order for federation to respect consent, servers must also be able to refuse to federate with other servers on a case-by-case basis. If a server is only used for sending spam, I can make sure that my own server blocks it. In the social media context, an instance that hosts Nazi content should not be allowed to federate with other instances. Blocking another instance this way is called defederation, and is as much an expression of community boundaries and a proactive consent culture as blocking individual accounts from your own account is.
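In software terms, defederation is little more than a blocklist that an instance consults before exchanging posts with another server. A minimal sketch (the blocked domains here are made up):

```python
# Hypothetical blocklist maintained by an instance's admins.
BLOCKED_INSTANCES = {"spam.example", "nazis.example"}

def should_federate(remote_instance: str) -> bool:
    """Refuse to exchange posts with instances the community has blocked."""
    return remote_instance not in BLOCKED_INSTANCES
```

Real fediverse software offers finer-grained moderation as well; Mastodon, for instance, distinguishes between limiting a domain and suspending it outright.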

What Do You Need to Do?

If this all sounds complicated, let me once again appeal to progressive values, but in a more crass way this time: it fucking should be. This, by the way, is where we get to the fossil fuel part of the analogy that I borrowed at the start of the piece. The modern Web did not become simple by eliminating technical complexity, but by consolidating ownership. While a Mastodon instance like wandering.shop or mathstodon.xyz can participate in the fediverse with any other instance that talks using ActivityPub, Mastodon-powered or not, twitter.com runs Twitter, is owned by Twitter, can only be used in the Twitter app or from twitter.com, and talks only to Twitter. That's not simplicity, it's a monopoly.

Just as the dominance of car culture over public transit has severe implications for climate change, that monopoly has drastic and horrifying consequences for progressive causes. The Internet and the Web are extremely powerful tools, to put it mildly. They have enabled entire generations of queer people to find each other, but they have also led directly to literal genocide. They allow for progressive organization and advocacy at unprecedented scales, but also have enabled the incredible escalation in fascism embodied in Trump's and DeSantis' campaigns and policies. We don't get the good uses of the Internet without challenging the monopolistic structures that underpin so much of the modern Web.

Progressivism is about action, though, and thus demands that we do the work to understand that complexity — the same complexity hidden by coercive monopolies — so that our tools don't once again become tools of right-wing reactionaries.

That's necessary but not sufficient, of course. The structure of the fediverse helps resist right-wing takeover and control of communication, but on its own it isn't sufficient to realize other progressive values, especially antiracism (to wit). Thus, progressivism has a few more demands to make of us. We need to advocate for better moderation policies on instances that we use and interact with. We need to raise awareness of where the fediverse isn't living up to what we need it to be. We need to sponsor efforts to make the fediverse better, more inclusive, and less hateful. We need to listen when people tell us we're not meeting that goal.

Perhaps, most of all, I strongly believe progressivism demands that we do not shut the fuck up. We need to be loud about why the fediverse is necessary, why making it better is necessary, what goes wrong with centralized social media, and most of all, what we can do together to truly make a progressive ideal of social media work.

How to Explain What a Qubit Is

In keeping with my love of hot takes, let me skip straight to the end: if you're writing a popular news article about quantum computing, the best way to explain what a qubit is is to just... not do that.

As fun as it is to drop a Granade-signature hot take and run away, let me unpack that a bit. Open up a random news article about quantum computing, and it will likely open with some form of brief explanation of what quantum computing is. Normally that will start with a sentence or two about what qubits are, something along the lines of "qubits can be in zero and one at the same time." Besides being so radically untrue that I included it in my quantum falsehoods post from a few years ago as QF1002, it just isn't very helpful to someone reading a news article about some new advancement. It's a bit like insisting that any popular article about classical computers should start out with a description of TTL voltages. All fascinating stuff, but a two-sentence summary at the start of a story just isn't the right place to delve into it.

In particular, it doesn't give the reader what they need to know to put quantum computing news into context: what impact are quantum computers likely to have on society, what challenges might preclude that, who is building them, and how do their interests relate to the reader's own? That these are generally regarded as more specialized details means you get a never-ending parade of folks who think quantum computers can instantly solve hard problems (they can't; that's QF1001), that quantum computers are faster classical computers (they aren't), or that quantum computers will be commercially viable in the next few years (they won't).

I'll offer that a much better way is to open with some version of this graph:

A hand-sketched graph comparing the time to solve problems of various sizes on classical and quantum computers, with classical computers eventually losing out to quantum devices.

I sketched this (pretty badly, too, sorry; I'm much better at doing quantum stuff than I am at drawing) to indicate that it's really quite schematic. The shape of those curves depends heavily on the kind of problem you're interested in. For any problem that allows for a quantum advantage, though, you'll have a plot of that form, showing that for small enough problems, classical computers will still be faster, but as problems grow, quantum approaches look more and more appealing. As classical computers get better, that crossover point moves out, but critically, the overall shape is a property of the problem and the algorithm used to solve it, not of the specific devices used.

Right away, this kind of plot already tells the reader a few very important things:

  • Quantum computers won't ever replace classical computers, since there will always be a range below that crossover point; that range gets bigger as classical computers improve and smaller as quantum devices improve, but will always be there in some form.
  • Quantum computers aren't just faster classical computers, and they don't need to be. Even for quantum computers with very slow gate speeds, for problems that allow for a quantum advantage, there will always be a crossover point where that quantum device will win.
  • The exact shape isn't just a property of the classical and quantum devices, but also of the problem being considered. Cryptography and material science questions will give you very different shapes, so readers caring about impacts on security and climate change research may take away very different impressions of how important quantum computers are.
  • When we talk about quantum advantage, we're talking about large problems that take significantly more qubits than current devices. While some contrived problems have crossover points that strongly favor quantum devices even at small problem sizes, that's not generally applicable to practical problems — readers can't infer from advantage experiments alone how important quantum computers might be to them and the problems they care about.
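To make the crossover concrete, here's a toy model in the same spirit as the sketch: an exponential-time classical algorithm with a fast per-step constant against a polynomial-time quantum algorithm with a much slower per-step constant. Every number here is invented purely for illustration, not measured from any real device:

```python
def classical_time(n: int) -> float:
    # Toy model: exponential scaling, nanosecond-scale steps.
    return 1e-9 * 2**n

def quantum_time(n: int) -> float:
    # Toy model: cubic scaling, millisecond-scale steps.
    return 1e-3 * n**3

def crossover(max_n: int = 100) -> int:
    """Smallest problem size at which the quantum model wins."""
    for n in range(1, max_n + 1):
        if quantum_time(n) < classical_time(n):
            return n
    raise ValueError("no crossover found below max_n")
```

Making the classical steps ten times faster pushes the crossover out by a few problem sizes, but the exponential-versus-polynomial shape, and hence the existence of a crossover, doesn't change.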

It's not that it doesn't matter what a qubit is — it absolutely does — rather, it's not the right context readers need to make sense of the endless stream of excitement, hype, progress, and misinfo thrown into the blender of modern news feeds. It's much more important and useful to provide that context directly than to perpetuate the same falsehoods.

You Can't Program Quantum Computers With Python (Part II)

Earlier, I shared Part I of my spicy take that "you can't program quantum computers with Python," focusing on ways to make the popular dichotomy between interpreted and compiled languages more precise and relevant. In particular, I made the claim that what was more relevant than being "compiled" was providing minimal runtime dependencies and providing strong design-time safety.

At least at the outset, that sounds like it's quite contradictory to how Python works as a language, and yet Python seems to be used quite well to program quantum devices --- what gives? In this part, I'll explore that tension by looking at two more issues. First, there's a difference between using Python as a programming language and using Python as a metaprogramming framework. Second, people mostly aren't really writing quantum programs --- not yet.

Programming and Metaprogramming

Consider a Python snippet like the following:

def f(x):
    return x + 2

This would, at the outset, seem to be perfectly clear: it defines a function f that takes an argument x and adds it to the constant value 2. In that sense, we're using Python as a programming language; that is, as a language to define a computer program for running on a classical computer.

We can do something different, though, with the same kind of function definition, but passing in something other than a number. Let's make a couple new data classes and see what we can do with them:

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Var:
    name: str

    def __add__(self, other: Expr) -> Expr:
        return PlusExpr(self, other)

    # Needed so we can handle adding a Var to a number,
    # as in 2 + x, where the number's __add__ defers to us.
    def __radd__(self, other: Expr) -> Expr:
        return PlusExpr(other, self)

@dataclass
class PlusExpr:
    left: Expr
    right: Expr

    def __add__(self, other: Expr) -> Expr:
        return PlusExpr(self, other)

    def __radd__(self, other: Expr) -> Expr:
        return PlusExpr(other, self)

NumberLiteral = int | float
Expr = Var | NumberLiteral | PlusExpr

(For the rest of the post, I'll quote snippets of this example rather than showing the full thing. If you want to see the entire example, check out this gist on GitHub.)

Now, if we call f with an instance of Var, what we get is a kind of description of a program:

>>> x = Var("x")
>>> f(x)
PlusExpr(left=Var(name='x'), right=2)

Critically, when we call f, our program doesn't actually add anything at all, but only generates a description of a program. We could later use that description to actually run our program:

>>> y = 2 * x + 1
>>> evaluate(y, x=3)
7
>>> evaluate(y, x=4)
9
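The evaluate function itself lives in the gist; a minimal version, with the two data classes restated so the snippet stands on its own (the gist's version also handles multiplication via a TimesExpr, which I've left out here), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class PlusExpr:
    left: object
    right: object

def evaluate(expr, **env):
    """Recursively evaluate an expression tree, looking up
    variables by name in the keyword arguments."""
    if isinstance(expr, Var):
        return env[expr.name]
    if isinstance(expr, PlusExpr):
        return evaluate(expr.left, **env) + evaluate(expr.right, **env)
    # Anything else is taken to be a number literal.
    return expr
```

Walking the tree this way is exactly the "pass" idea: evaluate is one consumer of the program description, and derivative is another.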

Effectively, what we've done is to build up a Python object representing a program rather than build a program directly. If this sounds familiar, that's the same technique used by TensorFlow and other machine learning libraries for Python to compile expressions to various accelerator backends. Having the complete structure of an algebraic expression as a Python object makes it much easier to target different backends, but it also makes it much easier to manipulate expressions and perform different "passes." You can even do things like take the derivative of an expression:

>>> x = Var("x")
>>> y = Var("y")
>>> z = 2 * x * x + 3 * x * y + (-4) * y
>>> simplify(derivative(z, x))
PlusExpr(left=PlusExpr(left=TimesExpr(left=2, right=Var(name='x')), right=TimesExpr(left=2, right=Var(name='x'))), right=TimesExpr(left=3, right=Var(name='y')))
>>> evaluate(derivative(z, x), x=3, y=4)
24
>>> evaluate(4 * x + 3 * y, x=3, y=4)
24

This only works because z is a Python object that we can inspect, manipulate, and transform. In general, programming techniques that work by manipulating and transforming other programs are known as metaprogramming techniques; here, we've used Python's operator overloading as a basic kind of metaprogramming, but many other common techniques such as templates/generics, macros, and code generation broadly fall under the term metaprogramming.

By contrast, if z was declared as a Python function — if we programmed z directly — we'd have limited access to its internal structure and would only be able to run z as a function (this is only mostly true, given that Python includes a disassembler, but that's a far more complicated approach to metaprogramming than what we're concerned with here).

One common application of metaprogramming is to quickly design new programming languages embedded in some host language. These new languages, sometimes called embedded domain-specific languages or embedded DSLs, borrow syntax from their host, but apply that syntax in distinct enough ways that programming in the embedded DSL can feel quite distinct from programming in the host language.

Suppose, for instance, that we want to generate some HTML programmatically using Python. We could then consider making an HTML-like language that embeds into Python — let's call it PML for Python Markup Language.

>>> html = PmlNodeKind("html")
>>> body = PmlNodeKind("body")
>>> p = PmlNodeKind("p")
>>> a = PmlNodeKind("a")
>>> html(
...     body(
...         p("Hello, world!"),
...         p(
...             "Click ",
...             a("here", href="http://example.com"),
...             " to learn more."
...         )
...     )
... ).to_html()
'<html><body><p>Hello, world!</p><p>Click <a href="http://example.com">here</a> to learn more.</p></body></html>'

Here, function calls no longer mean what they usually mean in Python; rather, we've reused function calls to mean something much more declarative. In particular, p("Hello!") isn't read as "run a function p with "Hello!" as its argument," but as "add a new p tag with "Hello!" as its contents." Behind the scenes, the PmlNodeKind class overloads __call__ to implement that declarative meaning, but writing code in PML no longer really feels like Python.
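PML isn't a real library, so to make the trick concrete, here's one way the machinery could be implemented; PmlNodeKind and PmlNode are my own illustrative names matching the snippet above, not an established API:

```python
class PmlNode:
    """A declared tag, together with its children and attributes."""
    def __init__(self, tag, children, attributes):
        self.tag = tag
        self.children = children
        self.attributes = attributes

    def to_html(self) -> str:
        attrs = "".join(
            f' {name}="{value}"' for name, value in self.attributes.items()
        )
        inner = "".join(
            child.to_html() if isinstance(child, PmlNode) else str(child)
            for child in self.children
        )
        return f"<{self.tag}{attrs}>{inner}</{self.tag}>"

class PmlNodeKind:
    """A kind of tag; *calling* an instance declares a node of that kind."""
    def __init__(self, tag: str):
        self.tag = tag

    def __call__(self, *children, **attributes) -> PmlNode:
        # Overloading __call__ is what gives PML its declarative feel.
        return PmlNode(self.tag, children, attributes)

# A quick check against the example above:
p = PmlNodeKind("p")
a = PmlNodeKind("a")
snippet = p("Click ", a("here", href="http://example.com"), " to learn more.").to_html()
```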

Many quantum libraries use this trick to build up circuits as well — let's look at a quick sample of building a quantum circuit using QuTiP:

import qutip_qip as qp
import qutip_qip.circuit
circ = qp.circuit.QubitCircuit(2)
circ.add_gate("SNOT", 0)
circ.add_gate("CNOT", 1, 0)

Just as we used our little PML example to generate HTML, QuTiP can generate OpenQASM 2.0 text from a circuit description:

>>> print(qp.qasm.circuit_to_qasm_str(circ))
// QASM 2.0 file generated by QuTiP

OPENQASM 2.0;
include "qelib1.inc";

qreg q[2];

h q[0];
cx q[0],q[1];

Using this kind of approach, we can build up simple circuits like quantum teleportation. This time, let's use Qiskit to give it a try:

from qiskit import ClassicalRegister, QuantumRegister
import qiskit as qk

# Helper functions, not part of Qiskit: they add the gates to
# entangle and unentangle the given pair of qubits.
from teleport_helpers import prepare_entangled_state, unprepare_entangled_state

qubits = QuantumRegister(3)
classical_bits = [ClassicalRegister(1) for _ in range(2)]
circ = qk.QuantumCircuit(qubits, *classical_bits)
prepare_entangled_state(circ, 1, 2)
unprepare_entangled_state(circ, 0, 1)
circ.measure(0, 0)
circ.measure(1, 1)
circ.z(2).c_if(classical_bits[0], 1)
circ.x(2).c_if(classical_bits[1], 1)

What's going on with that .c_if method? That gets to the heart of the difference between using Python to write quantum programs and using Python to implement an embedded DSL for writing quantum programs. The two approaches look very similar when we're writing quantum circuits, but couldn't be more different when we're writing quantum programs.

Quantum Circuits versus Quantum Programs

Wait, but aren't quantum circuits the same as quantum programs? No, not really — circuits are the special case of nonadaptive quantum programs: quantum programs where the list of quantum instructions to be executed is fixed, and does not depend on the outcomes of quantum measurements. Some circuit representations, such as that used by OpenQASM 2.0, include some small special cases of adaptivity, such as the teleportation example above, but for the most part, circuits are an almost vanishingly small subset of quantum programs in general. Due to hardware limitations with most prototype devices up to this point, though, circuits have been where the vast majority of the effort in programming quantum devices has focused so far.

More generally, quantum circuits are interesting subroutines in larger quantum programs that include lots of control flow, including branching on the results of quantum measurement. To represent that control flow in an embedded DSL using metaprogramming, we face the challenge that we can't actually rely on the host language for control flow.

If we could, we might expect something like the following to work:

# WARNING: this snippet is not valid!
qubits = QuantumRegister(3)
classical_bits = [ClassicalRegister(1) for _ in range(2)]
circ = qk.QuantumCircuit(qubits, *classical_bits)
prepare_entangled_state(circ, 1, 2)
unprepare_entangled_state(circ, 0, 1)
circ.measure(0, 0)
circ.measure(1, 1)
if classical_bits[0] == 1:
    circ.z(2)
if classical_bits[1] == 1:
    circ.x(2)

Indeed, that's closer to how standalone domain-specific languages (that is, DSLs that aren't embedded into host languages) such as Q# or OpenQASM 3.0 represent conditional quantum operations. In Qiskit and other embedded DSLs, though, the if keyword is taken by the host language, not the embedded language. In the above attempt, what we actually get is a Python program that either generates a quantum program with a z gate acting on qubit 2, or generates a quantum program without that gate. That is, the if statement is resolved when we generate the quantum program, not when we run it.

Instead, Qiskit provides a c_if method that transforms part of a quantum program into a new quantum program that includes a classical condition, similar to how our earlier derivative function transformed one program into another. Other embedded DSLs, such as PyQuil, provide methods such as if_then and while_do to emit if-conditions or while-loops into quantum programs.
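The generation-time-versus-run-time distinction is easy to see without any quantum library at all. Here's a toy circuit builder (entirely hypothetical, just recording instruction strings) with both a host-language if and a c_if-style method:

```python
class ToyCircuit:
    """Hypothetical builder that records instructions as strings."""
    def __init__(self):
        self.instructions = []

    def gate(self, name: str, target: int) -> None:
        self.instructions.append(f"{name} q[{target}]")

    def gate_if(self, bit: int, name: str, target: int) -> None:
        # Emits the condition *into* the program, like Qiskit's c_if:
        # the branch is taken each time the program runs.
        self.instructions.append(f"if (c[{bit}] == 1) {name} q[{target}]")

measured_one = False  # a host-language value, fixed at generation time

circ = ToyCircuit()
if measured_one:           # resolved now, while generating the program...
    circ.gate("z", 2)      # ...so this gate is simply never emitted
circ.gate_if(0, "z", 2)    # resolved later, when the program runs
```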

There are some ways around this challenge; for example, QCOR uses Python's built-in disassembler to turn Python code into quantum programs:

@qjit
def qpe(q : qreg):
    ...
    for i in range(bitPrecision):
        for j in range(1<<i):
            oracle.ctrl(q[i], q)

In general, though, implementing an embedded DSL for quantum programming within Python means that Python syntax is reserved for metaprogramming, and you need to come up with new ways of expressing programming constructs like loops and conditions.

Conclusions

With all of that in hand, we now have enough to come back to the original spicy take and cool it down a bit. You can't use Python to write quantum programs, but you absolutely can use Python to write embedded programming languages that you can then use to write quantum programs. That does present some challenges compared to standalone languages like Q# and OpenQASM 3.0, but at the same time, it can be a good path for bringing the power and flexibility of Python into quantum programming as a metaprogramming engine.

Quantum Folks, We Need to Talk About January 6th

Recently, Congress held a hearing on expanding the National Quantum Initiative (NQI) Act. On its own, this doesn't sound all that notable — new research and new technologies means new funding and new budgets, after all. Given the level of hype in quantum computing over the past few years, expanding the NQI isn't all that surprising.

At the same time, go read that first sentence again. Does it jump out at you that this is a discussion playing out in the US Congress, out of all political venues in the United States? Maybe not, as much of the rhythm of politics-as-usual plays out with the banging of gavels in hearing after hearing. Sitting in 2023, however, we are very far from politics-as-usual, so maybe the word "Congress" should jump out at you.

Maybe it is significant that the future of quantum investments by the United States government is being decided in the same halls that recently saw a manufactured debt crisis nearly shut down the entire government. Maybe it's more significant still that the hearing was held in the US House, the same august body currently debating things like how much to restrict abortion healthcare, whether or not to impeach cabinet members for spurious right-wing fever dreams, or something incomprehensible about gas stoves. Maybe it's more significant that the hearing on funding for quantum computing research and development was held in the same halls that, almost 30 months ago, were home to a vote to overturn democracy in the United States. Maybe it's more significant that the hearing was chaired by Representative Frank Lucas, who on January 6th sided with insurrectionists by voting to invalidate the election in Arizona.

Maybe it's significant that as Rep. Lucas regurgitated empty rhetoric about China's funding for quantum computing (making sure to get the word "communist" in there for Fox News viewers at home), he stood in the same building where, two and a half years ago, he tried to help a violent mob bring an end to free and fair elections in his own country.

It's tempting to treat quantum computing as some neutral thing, a technology without any moral values of its own. Perhaps it's even true, but that need not imply that researchers, developers, technical writers, project managers, or anyone else in the quantum research community or the quantum industry should act without morals. When someone like Rep. Frank Lucas stands up and tells us that he "cannot overstate the importance of maintaining the U.S. competitive advantage in quantum capabilities," we need to take him seriously, including the full and complete context of whom Rep. Lucas considers to be American, or even human.

Representative Lucas voted against COVID-19 relief, against providing a legal immigration path for DREAMers, against the right of same-sex or mixed-race couples to get married, against the freedom to vote (twice, even!), against healthcare for women and pregnant people more generally, against protections for Native Americans and LGBTQIA+ individuals who have been subjected to domestic violence, and against LGBTQIA+ rights more generally. Most critically of all, however, is indeed still his vote on January 6, 2021. As GovTrack.us notes:

Lucas was among the Republican legislators who participated in the attempted coup. On January 6, 2021 in the hours after the violent insurrection at the Capitol, Lucas voted to reject the state-certified election results of Arizona and/or Pennsylvania (states narrowly won by Democrats), which could have changed the outcome of the election. These legislators pumped the lies and preposterous legal arguments about the election that motivated the January 6, 2021 violent insurrection at the Capitol. The January 6, 2021 violent insurrection at the Capitol, led on the front lines by militant white supremacy groups, attempted to prevent President-elect Joe Biden from taking office by disrupting Congress’s count of electors.

Had Rep. Lucas and his effective collaborators scaling the walls outside the Capitol succeeded, we might well be dealing with a second Trump term despite the will of the voters. We can't know for certain what such a term would look like, but both in his time in office and in his social media posts since, Trump himself has given a pretty good clue that a second term would be horrific for queer people, people of color, people with disabilities, women and nonbinary people, and would be especially horrifying for anyone whose identity intersects multiple modes of oppression.

When Rep. Lucas tells us that "quantum computers have vast, untapped potential for both good and evil, which is why it’s so important that we stay ahead of our adversaries on these technologies," maybe we should consider whom Rep. Lucas considers his adversaries to be. At least for myself, I would posit that he made that much clear on January 6, 2021.

Have You Considered Egg?

I promise I'll post part II of my recent spicy quantum computing take soon, but in the meantime, I wanted to share a small bit of surreal flash fiction that I wrote recently. Thanks for reading, and I'll see you (approximately) next week!


I kind of just stared at it for a while. It was sitting there on the plate, right next to the bacon, the hashbrowns, the toast, the coffee, and the orange juice, just like it always did. I used to take it for granted that that's what you did with an egg: stare at it, sitting on the plate in its shell. Day after day, I would stop in at the café on my way to work, order the breakfast plate, and look at the egg as I carefully ate around it, not wanting to disturb it as it sat there, fragile but still unbroken. Day after day, I finished my meal and carefully slid the plate back towards the edge of the booth, put some cash down on the white hand-written bill, and left before any other customers arrived.

Except one day, that wasn't how it went at all.

I looked down at my watch; the flashing display read 8:53am, reminding me that I was currently missing my morning status meeting. I hadn't intended to be late, but once I saw how far the detour took me out of my way, I made sure to text in and offer my apologies. Not that I got a response, of course, but I hoped it was enough to at least buy some time to eat. There's a few routines that you can't really mess with, after all.

The egg just sat there on my plate as always, oblivious to how late it was, and to the din that had picked up in the café. I was far from the only customer for a change, and the whole space seemed to fill with life, accompanied by the bustle and noise that followed life wherever it went. Tentatively, I scooped my fork into the hash, but it didn't taste the same, the simple flavors of oil, starch, and salt mixing in my senses with the heavy aroma of coffee and stale air. It even looked different --- the golden brown shreds on the tarnished fork caught not only the warm yellow fluorescent light overhead, but also the sunlight that peeked in through the layers of adhesive caked on the window, a palimpsest of the different advertisements that had hung there over the past half-century. Less brown, more golden and vibrant, reflecting the fervor around it.

I chewed as best as I could, choking down just how overwhelmed I was. Stolen snippets of conversation intruded into my mind, threatening to pilfer my own thoughts as well. The potatoes were just a touch shy of burnt, as always, but their soft crunch became sharp and unsettling as I ate, even the texture of my breakfast turning against me.

Frustrated, I paused, looking up at the other diners and wishing they would stop with their noise, their smell, their movement --- all taking up more and more space in my brain. This was, or at least was supposed to be, the one moment in my day when I could just be, not have to process so many different senses to simply exist.

The egg just sat there, as always, still oblivious.

In the booth across from me, there was a man in a gray suit, a bit badly fitting and poorly pressed. What social obligation was he performing with such perfunctory and superficial compliance? The egg on my plate didn't know, or if it did, its uniform white shell betrayed no sign of comprehension, any more than its matte texture seemed aware of the sunlight. Everything else on my plate had changed, but the egg --- my egg --- just sat there.

The man in the suit brought a fork to his mouth, covered in something brilliant and yellow, but what? Not hash, not any selection from a fruit cup, not any part of a pie that I could recognize. I thought through the whole menu, recalling the contents of each different dish. Every day I looked at each option before deciding, as always, on my one-egg breakfast. There was nothing for it, though, no other options on the menu. I looked over to confirm, hoping I wasn't too obvious as I noted the bacon, hash browns, and toast on his plate. No doubt about it, process of elimination told me that had to be his egg. Not white, but yellow. Not solid, but oozing down his fork. Not matte, but almost glimmering.

Why was his egg so different from mine? My egg sat there, not offering any answers at all. (Rude.) I picked my egg up, ignoring the bacon to inspect every point on its surface, studying the way it refused to even acknowledge the sunlight that danced across every other thing I could see.

I was deep in my reverie when the waitress came by to refill my coffee. The dark liquid, almost a thin tar in its viscosity, flowed into my mug on its own schedule, lapping slightly at the edges as it settled into the ceramic. The sound startled me, giving the egg --- *my* egg --- an opportunity to escape. It dropped onto my plate with a soft click, a crack spreading across its surface.

This had never happened before. Eggs weren't supposed to break. But there it was, caring more for what my hard and yellowed plate had to say about matters than for my own need to have something, anything, stay the same.

As I watched, the crack spread further. Something shiny eked its way out of the shell, dripping onto the plate. I bent down to look, and saw

My childhood room, my furniture, my toys, my young body sitting on my carpet, reading one of my books. No, not a book that I knew of, but something different. As a child, I'd been obsessed with what I would eventually understand to be civil engineering, always asking questions about who made roads, who drew the shape of those roads across the landscape, who strung bridges across chasms. The cover of this book was different, full of little blue balls flying about other balls, stylized atoms and molecules. I blinked, and the book was full of animals, then plants, then swords and suits of armor, then ancient columns holding up ancient roofs. I blinked again and saw

My office, my desk, my papers, my computer. Just like the book that wasn't my book, I saw oozing out of the egg a different office, with a typewriter, with a shelf full of brown and green hardcover books, with a dizzying array of green and black circuit boards. I saw

The city skyline from my balcony. Paris, Sydney, Tokyo, and Cairo all spread across my plate as I watched my egg spill its contents over glistening Mediterranean sand, over fields amber and verdant. I saw

My closet, full of clothes not my own. T-shirts that were just "t-shirts," and not "fitted." Practical but uninspiring shoes. A dozen copies of the same slacks. High-visibility vests. Tuxedos. Leather straps. Logo-emblazoned polos. A flapper dress. I saw

My world, but not my world. What my world could have been, what it could still be. The possibilities I had left sitting inside each and every intact egg, day after day. The worlds I had sent back to the kitchen each morning. The lives I had been afraid to let out, to reflect the sunlight, to mix with the smell of coffee, sweat, and hope, to abandon their own shapes in favor of the plate or the toast or the hash.

I took a bite.

You Can't Program Quantum Computers With Python (Part I)

I'll warrant that "you can't program quantum computers with Python" is a spicy take on quantum computing, given the prevalence of Python-based toolchains for the field — everything from QuTiP through to Qiskit offers Python users a way to write and run quantum programs. At the same time, it's not that hot a take, as the same qualities that prevent using Python to write quantum programs also, perhaps paradoxically, make Python a great language for quantum computing.

To reconcile those two seemingly opposite claims, we'll need to take a tour through two long-running dichotomies in classical software development: interpreted versus compiled languages, and programming versus metaprogramming. That will necessarily be a bit long for a single post, so let's dive in with a discussion of compiled versus interpreted languages.

Compiled and Interpreted Languages

Generally, when we talk about programming languages, folks tend to separate them into compiled languages like C, C++, Rust, and Go, or interpreted languages like Python and JavaScript. If you ask how to classify languages like C# or Java that use a virtual machine to interpret intermediate-level bytecode at runtime, you'll get a different answer depending on the biases and preferences of whomever you ask. Add just-in-time compilation into the mix, and you'll as often as not get demure mumbles followed by a sudden shift to talking about the weather.

Taxonomy is hard, and fitting all languages into one of two buckets is one of the hardest taxonomical debates we run into in classical software development. So let's approach the problem with overwhelming and embarrassing levels of hubris, and simply solve it: all programming languages are interpreted, whether or not they involve the use of a compiler. The interpreter may be built into your CPU, or might be a complex userspace application, but it exists nonetheless. A sequence of bytes is always meaningless on its own, without reference to a particular device or application interpreting them as instructions.
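To make that point concrete, here's a small sketch using Python's standard struct module (the byte values are my own illustrative choice, not from any particular program): the very same four bytes decode to entirely different values depending on which interpretation we ask for.

```python
import struct

# The same four bytes, handed to two different "interpreters":
data = b"\x00\x00\x80\x3f"
as_float = struct.unpack("<f", data)[0]  # read as a little-endian 32-bit float
as_int = struct.unpack("<i", data)[0]    # read as a little-endian 32-bit integer
print(as_float)  # 1.0
print(as_int)    # 1065353216
```

A CPU decoding machine code does the same kind of thing, just in silicon: the bytes carry no meaning until something chooses how to read them.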

Rather, I'd posit that when people refer to the division between compiled and interpreted languages, a large part of the confusion stems from the fact that there are actually two distinct technical evaluations (at least!) being juggled behind the scenes: how complex are the runtime requirements for a language, and what design-time safety does a language provide? Both of these questions are quite tied up in the task of programming quantum computers, to put it mildly.

Runtime Dependencies

When you write in a language like C, you have access to the C Standard Library, a set of functions and data types like fopen for opening files, or strncpy for copying up to n characters of a string from one part of memory to another. Except when you don't. That standard library needs to exist at runtime, leading to a variety of different implementations being made available, including glibc, musl, Android's libbionic, and Microsoft's Universal C Runtime. In some cases, such as working with embedded microcontrollers or other low-power devices, those kinds of implementations might not make sense, such that you might not have a standard library available at all. As a result, the programs you write in C may have more or fewer runtime requirements, depending on what capabilities you assume and what kinds of devices you're trying to work with.

Traditionally, languages that we think of as being interpreted tend to have much heavier runtime requirements than compiled languages. JavaScript programs require either a browser or an engine like NodeJS to run (except when they don't), while Python carries the entire Python interpreter and its rather large standard library as dependencies.

Except when it doesn't. The MicroPython project provides an extremely lightweight implementation of the Python interpreter and an optional compiler, allowing Python to be used on small, low-power microcontrollers. The tradeoff is that, just as programming in C without the standard library is harder than programming with it, the version of Python recognized by MicroPython is a strict subset of the Python most people are used to working with, leading to a number of important differences.

Even MicroPython, though, is likely more than what can be reasonably run on the classical parts of a quantum device, especially when considering the extremely strict latency requirements imposed by coherence times. While there's no fixed set of runtime dependencies associated with a language, the fact that Python is very dynamic makes it difficult to fit its dependencies within the exacting requirements of quantum execution.

What do we mean by dynamic, though? Luckily, there's more to this post!

Design-Time Safety

In my previous post on types and typeclasses, I highlighted the role that types can play in checking the correctness of programs. To use an example from that post, consider passing an invalid input to a Python function that squares its argument:

>>> def square(x):
...     return x * x
...
>>> print(square("circle"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in square
TypeError: can't multiply sequence by non-int of type 'str'

Generally, if you make a logic error of this form in Python, it gets caught when you run the program. On the other hand, if I try to write the same thing in Rust, I get an error when I compile the program, complete with suggestions as to how to fix it:

$ cat src/main.rs
use num::Num;

fn square<T: Num + Copy>(x: T) -> T {
    x * x
}

fn main() {
    println!("{}", square("hello"));
}
$ cargo build
error[E0277]: the trait bound `&str: Num` is not satisfied
 --> src/main.rs:8:27
  |
8 |     println!("{}", square("hello"));
  |                    ------ ^^^^^^^ the trait `Num` is not implemented for `&str`
  |                    |
  |                    required by a bound introduced by this call
  |
  = help: the following other types implement trait `Num`:
            BigInt
            BigUint
            Complex<T>
            Ratio<T>
            Wrapping<T>
            f32
            f64
            i128
          and 11 others
note: required by a bound in `square`
 --> src/main.rs:3:14
  |
3 | fn square<T: Num + Copy>(x: T) -> T {
  |              ^^^ required by this bound in `square`

For more information about this error, try `rustc --explain E0277`.

That isn't to say that Rust is better, so much as that it provides different tradeoffs. While Python is far more flexible, and requires developers to specify a lot less information up front, that also means that there's less information available to validate programs as we write them instead of when we run them. The dichotomy isn't as strict as that, of course, thanks to design-time validators for Python like Pylint and Mypy, but it is generally true that languages like Python provide fewer design-time guarantees while languages like Rust provide more.
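As a rough sketch of that middle ground (using the standard typing module; the design-time behavior described is Mypy's, run as a separate tool): annotations give a checker enough information to flag a call like square("circle") before the program ever runs, even though Python itself treats the hints as mere metadata.

```python
from typing import get_type_hints

def square(x: int) -> int:
    return x * x

# At runtime, the hints are just metadata: Python on its own still won't
# stop a bad call like square("circle"), but a design-time checker such
# as Mypy can use these hints to reject it before the program runs.
print(get_type_hints(square))  # {'x': <class 'int'>, 'return': <class 'int'>}
```

The guarantees are opt-in rather than pervasive, which is exactly why hinted Python sits somewhere between the two extremes above.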

Much of that difference stems from the fact that in Python, types are dynamic, meaning that they are a runtime property of values. We can even check what type something is at runtime and make decisions accordingly:

>>> import random
>>> if random.randrange(2):
...     x = 42
... else:
...     x = "the answer"
...
>>> type(x)
<class 'str'>
>>> print("str" if isinstance(x, str) else "not str")
str

Here again, the dichotomy between dynamic and static type systems is a bit hard to pin down, given that languages like C# and Java support reflection as a way to make runtime decisions about types that would normally be static, and that polymorphism in C++ allows for some amount of dynamism subject to an inheritance bound. Even Rust has the dyn keyword for building a dynamic type out of a static typeclass bound. We can broadly say, though, that Python has a much more dynamic type system than most languages.

The practical effects of that dynamism are far-reaching, but what concerns us in this post is the impact on quantum computing: it's difficult to use types and other design-time programming concepts to make conclusions about what a block of Python code will do when we run it. That works well when computing time is cheaper than developer time, such that we can run and test code to ensure its validity, but works dramatically less well when computing time is expensive, as in the case of a quantum device.

Next time on...

In order to be useful for programming quantum computers, we want a language that is easy to use with minimal to no runtime dependencies, and that provides strong design-time guarantees as to correctness. Put together, exploring these two dichotomies tells us that we want something that looks more like what often gets called a compiled language, even if taking the compiled-vs-interpreted taxonomy literally isn't the most useful.

That then leaves the question as to what that "compiled" language should look like, if not Python. In the next post, I'll try to answer that by arguing that one very good alternative to using Python to program quantum computers is... to use Python.

The End of QCVV

For much of the history of quantum computing, researchers have been obsessed with a little subfield known as QCVV (an initialism only the government could love — but more on that later). A couple decades down the road, we can get some clues about quantum computing as an industry by taking a closer look at that obsession, where it came from, and why it's not quite as relevant now.

The obvious first question: what in the hell does "QCVV" even mean? The name dates back to US government grant agency announcements from the early 2010s, and expands to "quantum characterization, verification, and validation." That's a bit of a misnomer, though. By contrast with classical verification and validation, which is concerned mostly with formal proofs of correctness, QCVV is mostly concerned with benchmarks to assess the quality of a given quantum device.

In the early 2010s, such benchmarks were extremely important to grant agencies looking to make decisions about what research groups to fund, and to assess how much progress different groups were making. Back then, if you wanted to ask a question like "how good is this quantum device," there was more or less no real consensus in the field as to how to get an answer, let alone what a reasonable answer might look like.

In practice, this meant that each group would use different sets of metrics from everyone else, making comparisons across groups a nigh-impossibility. It's not that there weren't any known techniques or benchmarks, it's that no one could agree on which ones to use — whether to report process fidelity, diamond norms, tomographic reconstructions of superoperators, some combination of all of them, or even something entirely different.

As a relatively junior researcher at the time, late in my PhD program, my experience was that a lot of that confusion ultimately boiled down to a lack of consensus about what we were even trying to assess in the first place. From about 1985 through to the early 2000s, quantum computing was seen mostly as an academic curiosity; sitting where we are today, quantum computing is yesterday's buzzword. Between those two extremes was a field struggling to define itself as the prospect of commercial and military viability forced a primarily academic pursuit to grapple with questions of practicality.

QCVV, then, was to some extent an offshoot of that identity crisis; an attempt to reconcile what it meant for quantum computing research to "succeed." From the perspective of the US government, there was a stunningly clear answer. They wanted quantum computers to exist so that they could run stuff like Shor's algorithm on them, with the hope of breaking the public-key cryptography schemes that power the Internet we know and love. That simple demand turns out to have one hell of an implication, feeding directly back into the history of QCVV. In particular, Shor's algorithm takes a lot of qubits to run, as well as a pretty long runtime. Running long programs on large devices takes incredibly small error rates if you want answers to be even remotely accurate.

Back of the envelope, consider running a program that takes 1,000 distinct instructions (commonly known as gates) on each of 1,000 distinct qubits. That is, insofar as computing scales go, embarrassingly small — much smaller than what would be needed to run any practical quantum application. Even so, to get an average of about one error each time you run the program, that would require an error rate of no more than one in a million. Back in 2006, it was estimated that breaking a key with Shor's algorithm would take a number of qubits roughly one and a half times the length of that key. Recommended key lengths at the time were at least 2,048 bits, so given what we knew in the early 2010s, you'd at best need about 3,000 qubits running a program that's about 27 billion gates long. Putting that all together, you'd have needed error rates on the scale of one in a hundred trillion to be able to reliably use Shor's algorithm to break keys that were common at the time.
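That arithmetic is quick to check for yourself; here's the same back-of-envelope estimate as a few lines of Python, using the round figures above (assumptions from the estimate, not measured data):

```python
# Back-of-envelope estimate, using the rough figures quoted above:
qubits = 3_000            # ~1.5x a 2048-bit key length
gates = 27e9              # ~27 billion gates for Shor's algorithm at that size
total_ops = qubits * gates
# For an average of roughly one error per run, each operation can fail
# with probability at most about one in total_ops:
required_error_rate = 1 / total_ops
print(f"{required_error_rate:.0e}")  # ~1e-14, i.e. one in a hundred trillion
```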

It's only relatively recently that quantum devices have been able to reliably achieve error rates around 1%, so sitting in the 2010s, it was obvious that there were quite a few orders of magnitude worth of improvement needed to meet what the US government might consider "success." A trillion-fold improvement in error rates seems ridiculous on the face of it, leading some researchers to discount the possibility of quantum computing outright.

A decade earlier, in the early 2000s, however, the fault-tolerance threshold theorem guaranteed that if quantum computers were "good enough," you could use more and more qubits on your device to implement logical qubits that had as small an error rate as you want. That is, once you hit the threshold, requirements on error rates could be exchanged for requirements on qubit count. The threshold theorem gives a precise definition for what is "good enough" to meet that threshold, but in practice, that definition wound up being incredibly hard to measure in an actual physical device.
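To illustrate that exchange, here's a hedged sketch using a common heuristic for surface codes (the formula's form and its constants are my own illustrative assumptions, not anything specific to the original QCVV program): below threshold, each increase in the code distance d — which costs more physical qubits — suppresses the logical error rate by roughly another factor of p / p_th.

```python
# Common surface-code heuristic (illustrative constants, not measured values):
# logical error rate ~ a * (p / p_th) ** ((d + 1) // 2) at code distance d,
# where p is the physical error rate and p_th is the threshold.
def logical_error_rate(p, d, p_th=1e-2, a=0.1):
    return a * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th), spending more physical qubits (larger d)
# buys exponentially smaller logical error rates:
for d in (3, 5, 7):
    print(d, logical_error_rate(p=1e-3, d=d))
```

Above threshold, the same formula runs the other way: adding qubits makes things worse, which is why measuring where a device sat relative to the threshold mattered so much.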

The goal of the original QCVV grant program was then to come up with techniques and benchmarks that could be used to answer the question as to whether a given physical device was closer or further to the fault-tolerance threshold than some other device. With such a precise question as motivation, the early 2010s saw an explosion of different techniques developed to try and connect experimental observations to mathematical definitions of the fault-tolerance threshold theorem in rigorous ways. Personally, much of my own involvement came in the form of trying to put experimental protocols, sometimes notorious for dodgy stats and hand-wavy appeals to mathematical definitions, on a more firm statistical basis.

It's difficult to overstate here just how much QCVV protocols and techniques became the focal point of intense arguments and infighting. After all, grant agencies were originally interested in QCVV to make decisions about which groups deserved funding and which groups should have their funding cut. For academic researchers, decisions about QCVV could easily make or break careers. Entire groups could lose their funding, even, putting their graduate students and postdocs into extremely precarious positions. It's no wonder, then, that friendships and collaborations faced the same kind of existential pressure in a community nearly wholly without a healthy idea of work/life balance.

I promised you right in the overly sensationalized title of this post, though, that there was an "end to QCVV," so there is clearly more to the story than a few overly obsessed researchers duking it out on arXiv as to exactly what benchmarks should govern academic success. Perhaps ironically, the next chapter of quantum computing went pretty much the same as the one that led to the creation of QCVV as a subfield, starting with a question that served to crystallize discussions in the field.

By the late 2010s, the success of commercial demonstrations such as IBM's "Quantum Experience," launched in 2016, created a shift away from questions about eventual fault-tolerance towards questions about what could be done with prototype quantum devices available over the web. For the first time in quantum computing history, error rates of around 1% could be achieved reliably enough to offer up as a web service, instead of requiring a small army of PhD students.

Whereas before, the US government and other grant agencies around the world were largely undecided on which kinds of quantum devices to invest in — superconducting qubits, ion trap qubits, silicon dot qubits, or even something like NV center devices — corporate research programs each tended to be more committed to their own platforms. Data that could help decide between different platforms became correspondingly less important as a result, pushing research discussions away from using QCVV to assess fault-tolerance. By 2017, corporate interest in quantum computing had advanced to the point that there was even a conference, Q2B 2017, focused tightly on the potential business impact of quantum computing.

In 2018, that shift intensified with the publication of a very popular paper arguing that more attention should be focused on what we could do with non–fault tolerant devices. The focus, it was argued, shouldn't just be on making practical quantum computers, but on finding tasks that quantum devices could definitely beat classical devices at. This goal went under the startlingly racist term of "quantum supremacy," and had the apparent advantage over the fault-tolerance goal of possibly being attainable in only a few years.

Benchmarks, techniques, and protocols for assessing which of a set of candidate platforms might one day achieve fault-tolerance were suddenly less relevant in a world where decisions about platforms were much less volatile, and where the immediate goal was less about projected far-future devices than about what could be done today, or at least in the immediate future. The history of that shift is also inherently a history of how quantum computing has progressively been considered more applied throughout its history, as well as how the definition of "applied" has become increasingly more business-driven.


Thanks for reading the first post in my new newsletter! You can subscribe and help support me going forward at Forbidden Transitions. I promise that not all of my posts will be about quantum computing; I look forward to sharing my thoughts about tech more generally, tiny stories that I've written, and random stuff that's none of the above. In short, if you subscribe, you'll help me share more of me.