Building a better 'bot': Artificial intelligence helps human groups -- ScienceDaily

In a series of experiments using teams of human players and robotic AI players, the inclusion of "bots" boosted the performance of human groups and of individual players, researchers found. The study appears in the May 18 edition of the journal Nature.

"Much of the current conversation about artificial intelligence has to do with whether AI is a substitute for human beings. We believe the conversation should be about AI as a complement to human beings," said Nicholas Christakis, co-director of the Yale Institute for Network Science (YINS) and senior author of the study. Christakis is a professor of sociology, ecology & evolutionary biology, biomedical engineering, and medicine at Yale.

The study adds to a growing body of Yale research into the complex dynamics of human social networks and how those networks influence everything from economic inequality to group violence. In this case, Christakis and first author Hirokazu Shirado conducted an experiment involving an online game that required groups of people to coordinate their actions toward a collective goal. The human players also interacted with anonymous bots programmed with three levels of behavioral randomness -- meaning the AI bots sometimes deliberately made mistakes. The bots were also sometimes placed in different parts of the social network. More than 4,000 people participated in the experiment, which used Yale-developed software called breadboard.

"We mixed people and machines into one system, interacting on a level playing field," Shirado explained. "We wanted to ask, 'Can you program the bots in simple ways, and does that help human performance?'"

The answer to both questions is yes, the researchers said. Not only did the inclusion of bots aid the overall performance of human players, it proved particularly beneficial as tasks became more difficult, the study found.
The bots reduced the median time for groups to solve problems by 55.6%. Furthermore, the researchers said, the experiment showed a cascade effect of improved human performance. People whose performance improved while working with the bots subsequently influenced other human players to raise their game.
"The good thing about laws is if they don't exist and you want one - or if they exist and you don't like them - you can change them," Levandowski told students at the University of California, Berkeley in December. "And so in Nevada, we did our first bill."
Socialbots are being circulated around the Web for many purposes. To irritate his adversaries, a software developer from Australia designed a bot that automatically responds to tweets from climate change deniers, sending them counterarguments and links to studies debunking their claims. A security engineer in California programmed a bot to scoop up reservations for State Bird Provisions, a trendy restaurant in San Francisco. Mercenary armies of bots can be bought on the Web for as little as $250.

For some, the goal is increasing popularity. Last month, computer scientists from the Federal University of Ouro Preto in Brazil revealed that Carina Santos, a much-followed journalist on Twitter, was actually not a real person but a bot that they had created. Based on the circulation of her tweets, two commonly used ranking sites, Twitalyzer and Klout, ranked Ms. Santos as having more online "influence" than Oprah Winfrey.