The Hidden Costs of Automated Thinking | The New Yorker

Theory-free advances in pharmaceuticals show us that, in some cases, intellectual debt can be indispensable. Millions of lives have been saved on the basis of interventions that we fundamentally do not understand, and we are the better for it. Few would refuse to take a life-saving drug—or, for that matter, aspirin—simply because no one knows how it works. But the accrual of intellectual debt has downsides. As drugs with unknown mechanisms of action proliferate, the number of tests required to uncover untoward interactions must scale exponentially. (If the principles by which the drugs worked were understood, bad interactions could be predicted in advance.) In practice, therefore, interactions are discovered after new drugs are on the market, contributing to a cycle in which drugs are introduced, then abandoned, with class-action lawsuits in between. In each individual case, accruing the intellectual debt associated with a new drug may be a reasonable idea. But intellectual debts don’t exist in isolation. Answers without theory, found and deployed in different areas, can complicate one another in unpredictable ways.
I predict me

Instead of ‘I think therefore I am’ we can say: ‘I predict (myself) therefore I am.’ The specific experience of being you (or me) is nothing more than the brain’s best guess of the causes of self-related sensory signals.
The hard problem of consciousness is a distraction from the real one | Aeon Essays

Predictive processing can also help us understand unusual forms of visual experience, such as the hallucinations that can accompany psychosis or psychedelic trips. The basic idea is that hallucinations occur when the brain pays too little attention to incoming sensory signals, so that perception becomes unusually dominated by the brain’s prior expectations. Different sorts of hallucination – from simple geometric experiences of lines, patterns and textures to rich hallucinatory narratives full of objects and people – can be explained by the brain’s over-eagerness to confirm its predictions at different levels in the cortical hierarchy.
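The precision-weighting idea above — perception as a prior dominating an under-weighted sensory signal — can be illustrated with a toy Gaussian cue-combination sketch. The single-level update and all the numbers are illustrative simplifications of my own, not a model from the essay:

```python
def perceive(prior_mean, prior_precision, sensory_signal, sensory_precision):
    """Precision-weighted fusion of a prior expectation with a sensory signal.

    When sensory_precision is low relative to prior_precision, the percept
    is pulled toward the prior -- the toy analogue of the hallucination
    account above. All values here are illustrative.
    """
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean
            + sensory_precision * sensory_signal) / total

# Ordinary perception: the sensory signal is weighted heavily.
normal = perceive(prior_mean=0.0, prior_precision=1.0,
                  sensory_signal=10.0, sensory_precision=9.0)   # -> 9.0

# "Hallucinatory" regime: the same signal, sharply down-weighted.
hallucinatory = perceive(prior_mean=0.0, prior_precision=9.0,
                         sensory_signal=10.0, sensory_precision=1.0)  # -> 1.0
```

Turning down sensory precision leaves the percept sitting close to the prior expectation, regardless of what the world delivers.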
Soviet system for predicting nuclear war had lots of complex inputs with an arbitrary threshold

His worries about a surprise attack were amplified by “one peculiar mode of intelligence analysis,” a KGB computer model to measure perceived changes in the “correlation of forces” between the superpowers, according to the review. The computer went online in 1979 to warn Soviet leaders when “deterioration of Soviet power might tempt a US first strike,” the review says. The computer was at the heart of the VRYAN system, according to the review, and thousands of pieces of security and economic data were fed into the machine. The computer model assigned a fixed value of 100 to the United States, and Soviet leaders felt they would be safe from a nuclear first strike as long as they were at least at 60 percent of the United States, and ideally at 70 percent. Reports were sent to the ruling Politburo once a month.
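For all the thousands of inputs, the decision rule described above reduces to a fixed baseline and two arbitrary thresholds. A minimal sketch, assuming a composite Soviet score has already been computed (the source does not describe how the inputs were aggregated, so only the 100/60/70 figures come from it):

```python
# Toy sketch of the VRYAN-style "correlation of forces" rule.
US_BASELINE = 100.0       # the United States was assigned a fixed value of 100
DANGER_THRESHOLD = 0.60   # below 60% of the US, a first strike was feared
COMFORT_THRESHOLD = 0.70  # 70% or more was the level leaders ideally wanted

def assess(soviet_score: float) -> str:
    """Classify a composite Soviet power score against the fixed US baseline."""
    ratio = soviet_score / US_BASELINE
    if ratio < DANGER_THRESHOLD:
        return "danger: deterioration might tempt a US first strike"
    if ratio < COMFORT_THRESHOLD:
        return "marginal: above the floor, below the comfort level"
    return "safe"
```

The sketch makes the arbitrariness plain: a one-point swing in a single composite number flips the monthly report to the Politburo from one category to another.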
Taleb quote on our urge to narrative certainty

Both the artistic and scientific enterprises are the product of our need to reduce dimensions and inflict some order on things. Think of the world around you, laden with trillions of details. Try to describe it and you will find yourself tempted to weave a thread into what you are saying. A novel, a story, a myth, or a tale, all have the same function: they spare us from the complexity of the world and shield us from its randomness. Myths impart order to the disorder of human perception and the perceived “chaos of human experience.”
Risk takers are scarce (in theory)

It is not enough to think of an idea, or even to pursue it desultorily; the entrepreneur’s profit goes to the person who is willing to commit his time and energy to projects that may not succeed. Workers are guaranteed their salaries, lenders are guaranteed their interest, but the entrepreneur has no guarantee: he just collects all the money that is left over. That could be a fortune or it could be less than nothing at all. Most of us don’t want to take this risk. Part of the reason entrepreneurs are well rewarded is because they are scarce.
Rent seeking enabled by innovation OR knowledge

Schumpeter is correct in asserting that entrepreneurs are fundamentally rent-seeking creatures. But Schumpeter was romantic in thinking that only innovation (i.e., finding new ways of doing things that lower costs or increase quality or create a new good or market) is the source of rent. Much more common is what his fellow Austrian Friedrich Hayek called the “particularized knowledge of time and place.” An entrepreneur makes his living from that knowledge, from a profound local understanding of demand, suppliers, and price.
In short, Google Stars supports favoriting (called starring) URLs, lets you add a title and a note, and includes folders and sharing functionality (both public and private). How much of all this will remain in the final release is not clear. Naturally, Google could axe Google Stars before it ever sees the light of day, but the fact that the company is experimenting with such a service is very interesting. Like all major browsers, Chrome lets you favorite URLs. Yet Google Stars seems to be much more, and not just because the word “items” is used, which suggests users can favorite specific content on pages, like images and videos.
One start would be to tear down, or at least modify the “Chinese wall” between content and the business side. No other non-monopoly industry lets product creators off the hook on how the business works. Before the journalistic purists burst a fountain pen, consider that there are intermediate points between “holier than holy” and “hopelessly corrupt” when it comes to editorial content.
I started out as an actor, where you seek to understand yourself using the words of great writers and collaborating with other creative people. Then I slid into show business, where you seek only an audience’s approval, whether you deserve it or not.
There is a new vision of journalism – call it the auteur school – in which the business shifts from being organized by institutions to being organized around individual journalists with discrete followings.
At root, the tendency toward boring headlines flows from thinking of a newspaper as a bundle. Customers buy entire newspapers, not individual articles. So by the time the reader has opened a newspaper, he’s already a captive audience. That gives newspaper editors little reason to write flashy, eye-catching headlines. In contrast, people read online news one article at a time. Every article is competing with thousands of other articles for the reader’s attention.
My first design teacher, Philip Burton, used to say, over and over, don't be anecdotal. He would say, when we were presented with a design task in class, most of which were abstract in nature, that we weren't making rebuses. (Paul Rand's famous IBM ad is a design joke and intended to be amusing, not the company's new logo.) What Philip meant about anecdote is that we should not be looking for a literal rendition of what we wanted to represent, but rather a figurative one.
Most news organizations adopt headline conventions that, over time, become institutional clichés. (The New York Times: In Starting With a Prepositional Phrase, a Way to Sound Intelligent. Business Insider: BOOM: Here Is Something Extraordinarily Mundane. Quartz: Why everything you ever believed is a lie, in charts.) Other headlinese words—mull, see, probe, nix—are artifacts of space constraints imposed by narrow newspaper columns.

Space may also have something to do with how Bloomberg headlines got to be so odd. They are limited to 63 characters (45% of a tweet) to ensure the entire headline can fit on a single line of the terminal, which is the primary context in which Bloomberg News articles are read. “Billionaire Dethrones Kings in Beer to Burgers as Batista Model,” the headline on a profile of a Brazilian private-equity baron, ran exactly 63 characters and probably seemed more legible in the terminal than when the article found its way to the web. Much of Bloomberg’s journalism makes less sense as it gets further away from the terminal.
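The claim that the example headline runs exactly 63 characters can be checked directly:

```python
# Bloomberg terminal headlines must fit on a single 63-character line.
TERMINAL_LIMIT = 63

headline = "Billionaire Dethrones Kings in Beer to Burgers as Batista Model"
assert len(headline) == TERMINAL_LIMIT  # exactly at the limit, as the article says
```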
We are currently witnessing a re-architecture of the web, away from pages and destinations, towards completely personalised experiences built on an aggregation of many individual pieces of content. Content being broken down into individual components and re-aggregated is the result of the rise of mobile technologies, billions of screens of all shapes and sizes, and unprecedented access to data from all kinds of sources through APIs and SDKs. This is driving the web away from many pages of content linked together, towards individual pieces of content aggregated together into one experience.
Despite the efforts of companies like Klout and Twitalyzer, the industry that’s appeared around “influence” measurement best resembles the early days of search engine optimization. It’s full of tricks, games, and shady third parties trying to game the system to make a quick buck. Anyone with a few hours to spare can create a Twitter bot that not only appears human, but that, according to the best tools we have right now, is a more influential entity than actual people.
In this sense, social-media users today are where the pioneering video artists of the 1970s once were. Rosalind Krauss, in a 1976 essay called “Video: The Aesthetics of Narcissism,” condemned the effects of self-documenting art on the art world in general, pointing to the way it shifted emphasis to circulation and feedback: In the last fifteen years that world has been deeply and disastrously affected by its relation to mass media. That an artist’s work be published, reproduced and disseminated through the media has become, for the generation that has matured in the course of the last decade, virtually the only means of verifying its existence as art. The demand for instant replay in the media—in fact the creation of work that literally does not exist outside of that replay, as is true of conceptual art and its nether side, body art—finds its obvious correlative in an aesthetic mode by which the self is created through the electronic device of feedback.
In an interview with The Paris Review twenty years ago, Don DeLillo mentioned that “lists are a form of cultural hysteria.” From the vantage point of today, you wonder how much anyone—even someone as routinely prescient as DeLillo—could possibly have identified list-based hysteria in 1993. DeLillo’s statement also hints at something crucial about the list as a form: the tension between its gesturing toward order and its acknowledgement of order’s impossibility. The list—or, more specifically, the listicle—extends a promise of the definitive while necessarily revealing that no such promise could ever be fulfilled. It arises out of a desire to impose order on a life, a culture, a society, a difficult matter, a vast and teeming panorama of cat adorability and nineties nostalgia. Umberto Eco put it dramatically: “The list is the origin of culture. It’s part of the history of art and literature. What does culture want? To make infinity comprehensible. It also wants to create order.”
Design is no longer just ‘output’ with market share as the only feedback, the only measure of fitness. The artifacts design produces now talk back to us by a variety of means, and that data feeds back into subsequent design, becoming concrescent knowledge in parametric systems and thus a kind of genome inherent to the built habitat. Right now our explorations of generative and associative parametric systems are like the X-ray crystallography that first exposed the structure of DNA. And this is transforming the nature of design itself, moving it away from the ego-centric, aesthetics-dominated pursuit it has been in modern times, relying on talent and intuition, to a more social and global design science that uses form and structure to seek knowledge about the world, which, in fact, is what it was before it became ‘professionalized’. Design evolution as scientific method. Through parametric design we are discovering a theory of ‘design science’.