First time accepted submitter Hallie Siegel writes: Last December, a paper titled 'Playing Atari with Deep Reinforcement Learning' was uploaded to arXiv by employees of a small AI company called DeepMind. Two months later, Google bought DeepMind for 500 million euros, and this paper is almost the only thing we know about the company's work. A research team from the Computational Neuroscience Group at the University of Tartu's Institute of Computer Science is trying to replicate DeepMind's work and describe its inner workings.
90 comments | about a week ago
New submitter ted_pikul writes: Newly declassified documents reveal that, 30 years ago, the CIA pitted one of its own agents against an artificial intelligence interrogator to see whether the technology would be useful. The documents, written in 1983, describe a series of experimental tests (PDF) in which the CIA repeatedly interrogated its own agent using a primitive AI called Analiza. The intelligence on display in the transcript is clearly undeveloped, and seems to be a mixed bag of predetermined threats meant to goad interrogation subjects into spilling their secrets and open-ended lines of questioning.
65 comments | about two weeks ago
HizookRobotics writes: Georgia Tech researchers announced a new way robots can "sense" their surroundings through the use of small ultra-high frequency radio-frequency identification (UHF RFID) tags. Inexpensive self-adhesive tags can be stuck on objects, allowing an RFID-equipped robot to search a room for the correct tag's signal, even when the object is hidden out of sight. Once the tag is detected, the robot knows the object it's trying to find isn't far away. The researchers' methods are summarized over at IEEE: "The robot goes to the spot where it got the hottest signal from the tag it was looking for, zeroing in on it based on the signal strength that its shoulder antennas are picking up: if the right antenna is getting a stronger signal, the robot yaws right, and vice versa."
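The differential-antenna steering described in that quote can be sketched in a few lines. This is an illustrative assumption of how such a controller might look, not the researchers' actual code; the function name, RSSI values, and deadband threshold are all made up.

```python
# Minimal sketch of steering toward the stronger of two shoulder-antenna
# RSSI readings (values in dBm; less negative = stronger signal).
# All names and thresholds here are illustrative assumptions.

def steering_command(rssi_left: float, rssi_right: float,
                     deadband: float = 1.0) -> str:
    """Turn toward the antenna reporting the stronger tag signal."""
    diff = rssi_right - rssi_left
    if abs(diff) < deadband:      # signals roughly equal: keep driving straight
        return "forward"
    return "yaw_right" if diff > 0 else "yaw_left"

print(steering_command(-60.0, -55.0))  # right antenna stronger -> yaw_right
print(steering_command(-52.0, -58.0))  # left antenna stronger -> yaw_left
```

A real controller would also fuse readings over time, since single RSSI samples are noisy indoors.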
38 comments | about two weeks ago
An anonymous reader writes: Speech recognition has gotten pretty good over the past several years. It's reliable enough to be ubiquitous in our mobile devices. But now we have an interesting, related dilemma: should we develop algorithms that can lip read? It's a more challenging problem, to be sure. Sounds can be translated directly into words, but deriving meaning from the movement of a person's face is much more complex. "During speech, the mouth forms between 10 and 14 different shapes, known as visemes. By contrast, speech contains around 50 individual sounds known as phonemes. So a single viseme can represent several different phonemes. And therein lies the problem. A sequence of visemes cannot usually be associated with a unique word or sequence of words. Instead, a sequence of visemes can have several different solutions." Beyond the computational aspect, we also need to decide, as a society, whether this is a technology that should exist. The privacy implications extend beyond those of simple voice recognition.
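The many-to-one mapping the quote describes is easy to demonstrate. The toy phoneme-to-viseme table below is a simplified assumption, not a standard viseme inventory, but it shows why distinct words can collapse to the same lip-shape sequence.

```python
# Several phonemes share one mouth shape, so different words can
# produce identical viseme sequences. Mapping below is illustrative.

PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "ae": "open",
    "t": "alveolar", "d": "alveolar", "n": "alveolar",
}

def visemes(phonemes):
    """Collapse a phoneme sequence to its visible mouth-shape sequence."""
    return [PHONEME_TO_VISEME[p] for p in phonemes]

# "pat", "bat", and "mad" are indistinguishable to a lip reader here:
print(visemes(["p", "ae", "t"]))
print(visemes(["b", "ae", "t"]))
print(visemes(["m", "ae", "d"]))
```

This is why practical lip-reading systems lean on language models to pick the most plausible word among the many candidates a viseme sequence allows.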
120 comments | about three weeks ago
An anonymous reader writes "IEEE Spectrum contributor Mark Harris obtained a copy of the DMV test Google's autonomous car passed in Nevada in 2012, along with associated documents. What has not been revealed until now is that Google chose the test route; that it set limits on the road and weather conditions that the vehicle could encounter; and that its engineers had to take control of the car twice during the drive."
194 comments | about three weeks ago
An anonymous reader writes: The Google Quantum AI Team has announced that they're bringing in a team from the University of California at Santa Barbara to build quantum information processors within the company. "With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture." Google will continue to work with D-Wave, but the UC Santa Barbara group brings its own areas of expertise with superconducting qubit arrays.
72 comments | about a month ago
malachiorion writes: Researchers are force-feeding the internet into a system called Robo Brain. The system has absorbed a billion images and 120,000 YouTube videos so far, and aims to digest 10 times that within a year, in order to create machine-readable commands for robots—how to pour coffee, for example. From the article: "The goal is as direct as the project’s name—to create a centralized, always-online brain for robots to tap into. The more Robo Brain learns from the internet, the more direct lessons it can share with connected machines. How do you turn on a toaster? Robo Brain knows, and can share 3D images of the appliance and the relevant components. It can tell a robot what a coffee mug looks like, and how to carry it by the handle without dumping the contents. It can recognize when a human is watching a television by gauging relative positions, and advise against wandering between the two. Robo Brain looks at a chair or a stool, and knows that these are things that people sit on. It’s a system that understands context, and turns complex associations into direct commands for physical robots."
108 comments | about a month ago
Rick Zeman writes: Wired has an interesting article on the possibility of selectable ethical choices in robotic autonomous cars. From the article: "The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible. Philosophically, this opens up an interesting debate about the oft-clashing ideas of morality vs. liability." Meanwhile, others are thinking about the potential large-scale damage a robot car could do.
Lasrick writes: Patrick Lin examines a recent FBI report that warns of the use of robot cars as terrorist and criminal threats, calling the use of weaponized robot cars "game changing." Lin explores the many ways in which robot cars could be exploited for nefarious purposes, including the fear that they could help terrorist organizations based in the Middle East carry out attacks on US soil. "And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real." But Lin isn't too worried about these threats, and points out that there are far easier ways for terrorists to wreak havoc in the US.
239 comments | about a month and a half ago
KentuckyFC writes: Art experts look for influences between great masters by studying the artist's use of space, texture, form, shape, colour and so on. They may also consider the subject matter, brushstrokes, meaning, historical context and myriad other factors. So it's easy to imagine that today's primitive machine vision techniques have little to add. Not so. Using a new technique for classifying objects in images, a team of computer scientists and art experts have compared more than 1,700 paintings from over 60 artists dating from the early 15th century to the late 20th century. They've developed an algorithm that has used these classifications to find many well-known influences between artists, such as the influence of Pablo Picasso and Georges Braque on the Austrian symbolist painter Gustav Klimt, the influence of the French romantic Delacroix on the French impressionist Bazille, the Norwegian painter Munch's influence on the German painter Beckmann, and Degas' influence on Caillebotte. But the algorithm also discovered connections that art historians have never noticed (judge the comparisons for yourself). In particular, the algorithm points out that Norman Rockwell's Shuffleton's Barber Shop, painted in 1950, is remarkably similar to Frederic Bazille's Studio 9 Rue de la Condamine, painted 80 years before.
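The core idea of comparing paintings by their classified content can be sketched simply: represent each work as a vector of feature scores and measure similarity between vectors. The vectors, painter names, and use of cosine similarity below are illustrative assumptions; the team's actual features and distance measure differ.

```python
# Hedged sketch: paintings as hypothetical object-classification score
# vectors, compared with cosine similarity. All numbers are made up.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical scores, e.g. for "figure", "landscape", "interior", "chair":
rockwell = [0.9, 0.1, 0.7, 0.3]
bazille  = [0.8, 0.2, 0.6, 0.4]
munch    = [0.1, 0.9, 0.2, 0.8]

# A high similarity between two paintings flags a possible influence:
print(cosine_similarity(rockwell, bazille) > cosine_similarity(rockwell, munch))
```

Ranking all painting pairs by such a similarity score is how an algorithm can surface candidate influences for art historians to evaluate.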
74 comments | about a month and a half ago
Paul Fernhout writes: This explanatory compilation video by CGP Grey called "Humans Need Not Apply," on structural unemployment caused by robotics and AI (and other automation), is like the imagery that plays in my mind when I think about the topic, based on previous videos and charts I've seen. I saw it first on the econfuture site by Martin Ford, author of The Lights in the Tunnel. It is being discussed on Reddit, and people there have started mentioning a "basic income" as one possible response. While I like the basic income idea, I also collect other approaches in an essay called Beyond A Jobless Recovery: A heterodox perspective on 21st century economics. Beyond a basic income for the exchange economy, those possible approaches include gift economy, subsistence production, planned economy, and more — including many unpleasant alternatives like expanding prisons or fighting wars, as we are currently doing.
Marshall Brain's writings like Robotic Nation and Manna have inspired my own work. I made my own video version of the concept around 2010, as a parable called "The Richest Man in the World: A parable about structural unemployment and a basic income." (I also pulled together a lot of links to robot videos in 2009.) It's great to see more informative videos on this topic. CGP Grey's video is awesome in the way he puts it all together.
304 comments | about a month and a half ago
paysonwelch sends this report from Wired on the next generation of consumer AI: Google Now has a huge knowledge graph—you can ask questions like "Where was Abraham Lincoln born?" And it can name the city. You can also say, "What is the population?" of a city and it’ll bring up a chart and answer. But you cannot say, "What is the population of the city where Abraham Lincoln was born?" The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do. Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like "Give me a flight to Dallas with a seat that Shaq could fit in." Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom.
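The composition the article describes — answering a compound question by chaining single-fact lookups — can be sketched as two functions feeding each other. The tiny knowledge base and function names below are illustrative assumptions, not Viv's actual architecture.

```python
# Sketch of chaining single-fact queries to answer a compound question.
# The data and lookup functions are hypothetical stand-ins for the
# third-party sources a system like Viv would link together.

BIRTHPLACES = {"Abraham Lincoln": "Hodgenville"}
POPULATIONS = {"Hodgenville": 3206}

def birthplace(person: str) -> str:
    """Single-fact query: where was this person born?"""
    return BIRTHPLACES[person]

def population(city: str) -> int:
    """Single-fact query: what is this city's population?"""
    return POPULATIONS[city]

# "What is the population of the city where Abraham Lincoln was born?"
# becomes a composition of the two lookups above:
answer = population(birthplace("Abraham Lincoln"))
print(answer)
```

The hard part, of course, is what the sketch takes for granted: parsing the natural-language question into that composition automatically, which is the "generating its own code on the fly" claim.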
161 comments | about a month and a half ago
Recently, you had the chance to ask CIO for the City University of Hong Kong and AI researcher Andy Chun about his system that keeps the Hong Kong subway running and the future of artificial intelligence. Below you'll find his answers to those questions.
33 comments | about 2 months ago
First time accepted submitter catparty (3600549) writes: An examination of what we can know about Facebook's new machine learning News Feed algorithm. From the article: "Facebook's current News Feed algorithm might be smarter, but some of its core considerations don't stray too far from the groundwork laid by EdgeRank, though thanks to machine learning, Facebook's current algorithm has a better ear for 'signals from you.' Facebook confirmed to us that the new News Feed ranking algorithm does indeed take 100,000 weighted variables into account to determine what we see. These factors help Facebook display an average 300 posts culled from roughly 1,500 possible posts per day, per user."
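Ranking by weighted signals is conceptually simple, even at 100,000 variables. The sketch below uses three hypothetical signals and invented weights to illustrate the cull from many candidate posts down to the few shown; none of these names or numbers come from Facebook.

```python
# Minimal sketch of weighted-variable feed ranking. Signals and
# weights are illustrative assumptions, not Facebook's.

WEIGHTS = {"friend_affinity": 3.0, "post_likes": 0.5, "recency": 2.0}

def score(post: dict) -> float:
    """Weighted sum of a post's signals (missing signals count as 0)."""
    return sum(WEIGHTS[k] * post.get(k, 0.0) for k in WEIGHTS)

candidates = [
    {"id": 1, "friend_affinity": 0.9, "post_likes": 2.0, "recency": 0.5},
    {"id": 2, "friend_affinity": 0.1, "post_likes": 8.0, "recency": 0.9},
]

# Highest-scoring posts surface first; in the real system roughly
# 300 of ~1,500 daily candidates survive this kind of cut.
ranked = sorted(candidates, key=score, reverse=True)
print([p["id"] for p in ranked])
```

What machine learning adds is tuning those weights per user from behavior, rather than hand-setting them as EdgeRank's simpler formula did.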
130 comments | about 2 months ago
meghan elizabeth (3689911) writes: Advancing a career in the U.S. government might soon require an interview with a computer-generated head who wants to know about that time you took ketamine. A recent study by psychologists at the National Center for Credibility Assessment, published in the journal Computers in Human Behavior, asserts that not only would a computer-generated interviewer be less "time consuming, labor intensive, and costly to the Federal Government," people are actually more likely to admit things to the bot. Eliza finds a new job.
102 comments | about 2 months ago
New submitter gthuang88 (3752041) writes: In the 1990s, Microsoft was in a position to own the software and devices market. Here is Nathan Myhrvold's previously unpublished 1997 memo on expanding Microsoft Research to tackle problems in software testing, operating systems, artificial intelligence, and applications. Those fields would become crucial in the company's competition with Google, Apple, Amazon, and Oracle. But the research arm didn't do enough to push the company to broaden its businesses. While Microsoft Research was originally founded to ensure the company's future, the organization only mapped out some possible futures. And now Microsoft is undergoing the biggest restructuring in its history. At least F# and LINQ saw the light of day.
161 comments | about 2 months ago
samzenpus (5) writes "Dr. Andy Chun is the CIO for the City University of Hong Kong, and is instrumental in transforming the school to be one of the most technology-progressive in the region. He serves as an adviser on many government boards including the Digital 21 Strategy Advisory Committee, which oversees Hong Kong's long-term information technology strategies. His research work on the use of Artificial Intelligence has been honored with numerous awards, and his AI system keeps the subway in Hong Kong running and repaired with an amazing 99.9% uptime. Dr. Chun has agreed to give us some of his time in order to answer your questions. As usual, ask as many as you'd like, but please, one question per post."
71 comments | about 2 months ago
Last week you had a chance to ask the Associate Chair of Research in the Computer & Information Science & Engineering Department at the University of Florida, Juan Gilbert, about the Human Centered Computing Lab, accessibility issues in technology, and electronic voting. Below you'll find his answers to your questions.
18 comments | about 3 months ago
meghan elizabeth writes: If the Turing Test can be fooled by common trickery, it's time to consider a new standard. The Lovelace Test is designed to be more rigorous, testing for true machine cognition. An intelligent computer passes the Lovelace Test only if it originates a "program" that it was not engineered to produce. The new program—it could be an idea, a novel, a piece of music, anything—can't be a hardware fluke. The machine's designers must not be able to explain how their original code led to this new program. In short, to pass the Lovelace Test a computer has to create something original, all by itself.
285 comments | about 3 months ago
Taco Cowboy writes: The subway system in Hong Kong has one of the best uptime records anywhere: 99.9%, which beats London's Tube and New York's subway hands down. In an average week, as many as 10,000 people carry out 2,600 engineering works across the system — from grinding down rough rails to replacing tracks to checking for damage. But while humans carry out the work, the one deciding which tasks get done isn't a human being at all. The selection and scheduling of every engineering task is handled by an algorithm. Andy Chun of Hong Kong's City University, who designed the AI system, says, "Before AI, they would have a planning session with experts from five or six different areas. It was pretty chaotic. Now they just reveal the plan on a huge screen." Chun's AI program works with a simulated model of the entire system to find the best schedule for necessary engineering works. From its omniscient view it can see chances to combine work and share resources that no human could. However, to provide an added layer of security, the schedule generated by the AI is still subject to human approval; urgent, unexpected repairs can be added manually, and the system will reschedule less important tasks. It also checks the maintenance it plans for compliance with local regulations. Chun's team encoded into machine-readable language 200 rules that the engineers must follow when working at night, such as keeping noise below a certain level in residential areas. The main difference between normal software and Hong Kong's AI is that it contains human knowledge that takes years to acquire through experience, says Chun. "We asked the experts what they consider when making a decision, then formulated that into rules – we basically extracted expertise from different areas about engineering works," he says.
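Encoding an expert rule in machine-readable form, as Chun's team did with their 200 night-work rules, amounts to writing constraints the scheduler checks every candidate task against. The rule structure, field names, and the 60 dB figure below are illustrative assumptions, not the actual MTR rules.

```python
# Hedged sketch of one encoded night-work rule: noise limits for
# residential areas. Field names and the dB threshold are made up.

def complies(task: dict) -> bool:
    """Night work in a residential area must keep predicted noise
    at or below 60 dB; everything else passes this rule."""
    if task["time"] == "night" and task["area"] == "residential":
        return task["noise_db"] <= 60
    return True

rail_grinding = {"time": "night", "area": "residential", "noise_db": 75}
inspection    = {"time": "night", "area": "residential", "noise_db": 45}

# The scheduler would only admit tasks that pass every such rule check:
print([complies(t) for t in (rail_grinding, inspection)])
```

A full system would run hundreds of such checks per task and feed the survivors into the optimizer that combines work and shares resources across the network.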
162 comments | about 3 months ago
theodp writes: The AP's announcement that software will write the majority of its earnings reports, argues The Atlantic's Joe Pinsker, doesn't foretell the end of journalism — such reports hardly require humans anyway. Pinsker writes, "While, yes, it's true that algorithms can cram stories about vastly different subjects into the same uncanny monotone — they can cover Little League like Major League Baseball, and World of Warcraft raids like firefights in Iraq — they're really just another handy attempt at sifting through an onslaught of data. Automated Insights' success goes hand-in-hand with the rise of Big Data, and it makes sense that the company's algorithms currently do best when dealing in number-based topics like sports and stocks." So, any chance that Madden-like (video) generated play-by-play technology could one day be applied to live sporting events?
29 comments | about 3 months ago