AI

Quest for AI Leadership Pushes Microsoft Further Into Chip Development (bloomberg.com) 31

From a Bloomberg report: Tech companies are keen to bring cool artificial intelligence features to phones and augmented reality goggles -- the ability to show mechanics how to fix an engine, say, or tell tourists what they are seeing and hearing in their own language. But there's one big challenge: how to manage the vast quantities of data that make such feats possible without making the devices too slow or draining the battery in minutes and wrecking the user experience. Microsoft says it has the answer with a chip design for its HoloLens goggles -- an extra AI processor that analyzes what the user sees and hears right there on the device rather than wasting precious microseconds sending the data back to the cloud. The new processor, a version of the company's existing Holographic Processing Unit, is being unveiled at an event in Honolulu, Hawaii, today. The chip is under development and will be included in the next version of HoloLens; the company didn't provide a date. This is one of the few times Microsoft is playing all roles (except manufacturing) in developing a new processor. The company says this is the first chip of its kind designed for a mobile device. Bringing chipmaking in-house is increasingly in vogue as companies conclude that off-the-shelf processors aren't capable of fully unleashing the potential of AI. Apple is testing iPhone prototypes that include a chip designed to process AI, a person familiar with the work said in May. Google is on the second version of its own AI chips. To persuade people to buy the next generation of gadgets -- phones, VR headsets, even cars -- the experience will have to be lightning fast and seamless.
AI

Mozilla's New Open Source Voice-Recognition Project Wants Your Voice (mashable.com) 55

An anonymous reader quotes Mashable: Mozilla is building a massive repository of voice recordings for the voice apps of the future -- and it wants you to add yours to the collection. The organization behind the Firefox browser is launching Common Voice, a project to crowdsource audio samples from the public. The goal is to collect about 10,000 hours of audio in various accents and make it publicly available for everyone... Mozilla hopes to hand over the public dataset to independent developers so they can harness the crowdsourced audio to build the next generation of voice-powered apps and speech-to-text programs... You can also help train the speech-to-text capabilities by validating the recordings already submitted to the project. Just listen to a short clip, and report back whether the text on the screen matches what you heard... Mozilla says it aims to expand the tech beyond just a standard voice recognition experience, including multiple accents, demographics and eventually languages, for more accessible programs. Past open source voice-recognition projects have included Sphinx 4 and VoxForge, but unfortunately most of today's systems are still "locked up behind proprietary code at various companies, such as Amazon, Apple, and Microsoft."
China

Beijing Wants AI To Be Made In China By 2030 (nytimes.com) 169

Reader cdreimer writes: According to a report in The New York Times (may be paywalled; alternative story here): "If Beijing has its way, the future of artificial intelligence will be made in China. The country laid out a development plan on Thursday to become the world leader in A.I. by 2030, aiming to surpass its rivals technologically and build a domestic industry worth almost $150 billion. Released by the State Council, the policy is a statement of intent from the top rungs of China's government: The world's second-largest economy will be investing heavily to ensure its companies, government and military leap to the front of the pack in a technology many think will one day form the basis of computing. The plan comes with China preparing a multibillion-dollar national investment initiative to support "moonshot" projects, start-ups and academic research in A.I., according to two professors who consulted with the government about the effort."
AI

IBM's AI Can Predict Schizophrenia With 74 Percent Accuracy By Looking at the Brain's Blood Flow (engadget.com) 93

Andrew Tarantola reports via Engadget: Schizophrenia is not a particularly common mental health disorder in America, affecting just 1.2 percent of the population or around 3.2 million people, but its effects can be debilitating. However, pioneering research conducted by IBM and the University of Alberta could soon help doctors diagnose the onset of the disease and the severity of its symptoms using a simple MRI scan and a neural network built to look at blood flow within the brain. The research team first trained its neural network on a 95-member dataset of anonymized fMRI images from the Function Biomedical Informatics Research Network which included scans of both patients with schizophrenia and a healthy control group. These images illustrated the flow of blood through various parts of the brain as the patients completed a simple audio-based exercise. From this data, the neural network cobbled together a predictive model of the likelihood that a patient suffered from schizophrenia based on the blood flow. It was able to accurately discern between the control group and those with schizophrenia 74 percent of the time. What's more, the model managed to also predict the severity of symptoms once they set in. The study has been published in the journal Nature.
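The study as described boils down to a familiar supervised-learning setup: derive features from fMRI blood-flow data for 95 subjects, train a small neural network, and measure how often it separates patients from controls. The sketch below illustrates that workflow only; it is not the IBM/University of Alberta pipeline, and the feature matrix is random stand-in data where real functional-connectivity features would go.

```python
# Minimal sketch (not the IBM/University of Alberta method): train a small neural
# network to separate patients from controls using fMRI-derived features, and
# estimate accuracy with cross-validation. X here is random placeholder data;
# in practice each row would hold blood-flow features for one of the 95 subjects.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 95, 200                 # dataset size mentioned in the article
X = rng.normal(size=(n_subjects, n_features))    # placeholder fMRI-derived features
y = rng.integers(0, 2, size=n_subjects)          # 1 = schizophrenia, 0 = healthy control

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)      # held-out accuracy per fold
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

With real features in place of the random matrix, the cross-validated accuracy is the figure comparable to the 74 percent reported above.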
Intel

Intel Launches Movidius Neural Compute Stick: 'Deep Learning and AI' On a $79 USB Stick (anandtech.com) 59

Nate Oh, writing for AnandTech: Today Intel subsidiary Movidius is launching their Neural Compute Stick (NCS), a version of which was showcased earlier this year at CES 2017. The Movidius NCS adds to Intel's deep learning and AI development portfolio, building off of Movidius' April 2016 launch of the Fathom NCS and Intel's later acquisition of Movidius itself in September 2016. As Intel states, the Movidius NCS is "the world's first self-contained AI accelerator in a USB format," and is designed to allow host devices to process deep neural networks natively -- or in other words, at the edge. In turn, this provides developers and researchers with a low power and low cost method to develop and optimize various offline AI applications. Movidius's NCS is powered by their Myriad 2 vision processing unit (VPU), and, according to the company, can reach over 100 GFLOPs of performance within a nominal 1W of power consumption. Under the hood, the Movidius NCS works by translating a standard, trained Caffe-based convolutional neural network (CNN) into an embedded neural network that then runs on the VPU. In production workloads, the NCS can be used as a discrete accelerator for speeding up or offloading neural network tasks. Otherwise for development workloads, the company offers several developer-centric features, including layer-by-layer neural network metrics to allow developers to analyze and optimize performance and power, and validation scripts to allow developers to compare the output of the NCS against the original PC model in order to ensure the accuracy of the NCS's model. According to Gary Brown, VP of Marketing at Movidius, this 'Acceleration mode' is one of several features that differentiate the Movidius NCS from the Fathom NCS. The Movidius NCS also comes with a new "Multi-Stick mode" that allows multiple sticks in one host to work in conjunction in offloading work from the CPU. For multiple stick configurations, Movidius claims that they have confirmed linear performance increases up to 4 sticks in lab tests, and are currently validating 6 and 8 stick configurations. Importantly, the company believes that there is no theoretical maximum, and they expect that they can achieve similar linear behavior for more devices. Ultimately, though, scalability will depend at least somewhat on the neural network itself, and developers trying to use the feature will want to play around with it to determine how well they can reasonably scale. As for the technical specifications, the Movidius Neural Compute Stick features 4Gb of on-chip LPDDR3 memory and a USB 3.0 Type A interface.
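The "Multi-Stick mode" described above is essentially fan-out of inference requests across several attached accelerators. The sketch below shows that host-side pattern only; it does not use the Movidius SDK, and the per-stick call is a stub standing in for the vendor's load-graph and run-inference steps.

```python
# Illustrative sketch only (not the Movidius NCSDK API): distribute inference
# requests round-robin across several attached accelerator sticks, the idea
# behind "Multi-Stick mode". run_on_stick is a stub; with real hardware it
# would wrap the SDK calls that load the compiled CNN and run a frame.
import concurrent.futures
import itertools
import time

NUM_STICKS = 4  # near-linear scaling was validated up to 4 sticks, per the article

def run_on_stick(stick_id: int, frame: int) -> str:
    """Stand-in for running one frame through the CNN loaded on stick `stick_id`."""
    time.sleep(0.01)                       # pretend the VPU takes ~10 ms per frame
    return f"frame {frame} classified on stick {stick_id}"

def classify_stream(frames):
    """Fan frames out across sticks, one worker thread per stick."""
    sticks = itertools.cycle(range(NUM_STICKS))
    with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_STICKS) as pool:
        futures = [pool.submit(run_on_stick, next(sticks), f) for f in frames]
        return [f.result() for f in futures]

if __name__ == "__main__":
    for line in classify_stream(range(8)):
        print(line)
```

Because each stick runs its own copy of the network independently, throughput scales roughly with the number of sticks until the host's USB bandwidth or CPU-side pre-processing becomes the bottleneck, which is consistent with the linear scaling claim.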
AI

Researchers Have Figured Out How To Fake News Video With AI (qz.com) 85

An anonymous reader quotes a report from Quartz: A team of computer scientists at the University of Washington has used artificial intelligence to render visually convincing videos of Barack Obama saying things he's said before, but in a totally new context. In a paper published this month, the researchers explained their methodology: Using a neural network trained on 17 hours of footage of the former U.S. president's weekly addresses, they were able to generate mouth shapes from arbitrary audio clips of Obama's voice. The shapes were then textured to photorealistic quality and overlaid onto Obama's face in a different "target" video. Finally, the researchers retimed the target video to move Obama's body naturally to the rhythm of the new audio track. In their paper, the researchers pointed to several practical applications of being able to generate high quality video from audio, including helping hearing-impaired people lip-read audio during a phone call or creating realistic digital characters in the film and gaming industries. But the more disturbing consequence of such a technology is its potential to proliferate video-based fake news. Though the researchers used only real audio for the study, they were able to skip and reorder Obama's sentences seamlessly and even use audio from an Obama impersonator to achieve near-perfect results. The rapid advancement of voice-synthesis software also provides easy, off-the-shelf solutions for compelling, falsified audio. You can view the demo here: "Synthesizing Obama: Learning Lip Sync from Audio"
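The core learning step described above is a regression from audio features to mouth shapes. The rough sketch below shows only that idea on synthetic data; it is not the University of Washington system, which adds recurrence, photorealistic texture synthesis, and compositing onto a target video. All array sizes and names here are illustrative assumptions.

```python
# Rough sketch of the audio-to-mouth-shape mapping (not the UW "Synthesizing
# Obama" pipeline): regress synthetic "MFCC-like" audio frames onto 2-D mouth
# landmark coordinates, then predict mouth shapes for unseen audio.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_frames, n_audio_feats, n_landmarks = 5000, 13, 18   # 18 landmarks -> 36 coordinates

audio = rng.normal(size=(n_frames, n_audio_feats))    # placeholder audio features
true_map = rng.normal(size=(n_audio_feats, n_landmarks * 2))
mouth = audio @ true_map + 0.05 * rng.normal(size=(n_frames, n_landmarks * 2))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1)
model.fit(audio[:4000], mouth[:4000])                 # analogous to training on 17 hours of footage

predicted_shapes = model.predict(audio[4000:])        # mouth shapes for new audio clips
print("predicted landmark array shape:", predicted_shapes.shape)
```

In the real system the predicted shapes are then rendered as photorealistic mouth textures and blended into the target footage, which is the step that makes the output hard to distinguish from genuine video.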
AI

Many Firms Are 'AI Washing' Claims of Intelligent Products (axios.com) 93

Software companies are seeking to exploit the current artificial intelligence craze by "AI washing" -- exaggerating the role of AI in their products, according to a new report by Gartner, the research firm. From a report: Gartner, which tracks commercial manias through a tool it calls the Hype Cycle, compares what is currently going on in AI with a prior surge in environmental over-statement -- "greenwashing, in which companies exaggerate the environmental-friendliness of their products or practices for business benefit." The bottom line: More than 1,000 vendors say their products employ AI, but many are "applying the AI label a little too indiscriminately," Gartner says in its report. Kriti Sharma, who runs the AI team at Sage, tells Axios that a lot of companies are seeking to solve problems using AI that would be better done by humans. And what is often called AI "is just automation that you are doing," she said.
AI

Michigan Will Build 25 Self-Driving Trolleys In 2017 (observer.com) 100

French trolley-maker Navya announced its first manufacturing facility in North America. The company will build a 20,000 square foot facility for the construction of its self-driving trolley, the Arma. "It aims to construct 25 vehicles there this year," reports Observer. "It has 45 vehicles deployed around the world already. These robots have a max speed of about 27 miles per hour, but typically travel more like 12 miles per hour (the speed of a typical bike ride). Each one can transport about 15 people." From the report: The plant will be built in Saline, Michigan, a suburban town just south of Ann Arbor with a population of less than 9,000. The Michigan Economic Development Corporation estimates that the plant will support 50 new jobs. "As the greater Ann Arbor area continues to establish itself as a hub for autonomous vehicle development, we feel it's the perfect location for us. Strong government and community support for mobility initiatives combined with an excellent talent pool provide the ideal environment for our expansion in North America," Navya CEO Christophe Sapet said in a press release. "I have no doubt that they will become an important and valued member of our already stellar business community," Brian Marl, Saline's mayor, said in a release.
Privacy

Facial Recognition Could Be Coming To Police Body Cameras (defenseone.com) 179

schwit1 quotes a report from Defense One: Even if the cop who pulls you over doesn't recognize you, the body camera on his chest eventually just might. Device-maker Motorola will work with artificial intelligence software startup Neurala to build "real-time learning for a person of interest search" on products such as the Si500 body camera for police, the firm announced Monday. Italian-born neuroscientist and Neurala founder Massimiliano Versace has created patent-pending image recognition and machine learning technology. It's similar to other machine learning methods but far more scalable, so a device carried on that cop's shoulder can learn to recognize shapes -- and potentially faces -- as quickly and reliably as a much larger and more powerful computer. It works by mimicking the mammalian brain, rather than the way computers have worked traditionally.

Versace's research was funded, in part, by the Defense Advanced Research Projects Agency or DARPA under a program called SyNAPSE. In a 2010 paper for IEEE Spectrum, he describes the breakthrough. Basically, a tiny constellation of processors does the work of different parts of the brain -- which is sometimes called neuromorphic computation -- or "computation that can be divided up between hardware that processes like the body of a neuron and hardware that processes the way dendrites and axons do." Versace's research shows that AIs can learn in that environment using a lot less code.
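To make the quoted division of labor concrete, here is a toy illustration, not Neurala's or SyNAPSE's actual design: one component integrates incoming signals the way a dendritic tree does, while a separate component plays the role of the neuron body and decides when to fire. All class names and numbers are invented for the example.

```python
# Toy neuromorphic-style split (illustration only): "dendrite" hardware weights
# and sums incoming spikes; "soma" hardware accumulates that current with a
# leak and emits a spike past a threshold.
import numpy as np

class Dendrites:
    """Collects and weights incoming spikes, like the dendritic tree."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def integrate(self, inputs):
        return float(self.weights @ np.asarray(inputs, dtype=float))

class Soma:
    """Accumulates dendritic current and fires once a threshold is crossed."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential, self.threshold, self.leak = 0.0, threshold, leak

    def step(self, current):
        self.potential = self.potential * self.leak + current
        if self.potential >= self.threshold:
            self.potential = 0.0            # reset after firing
            return 1
        return 0

dendrites, soma = Dendrites([0.4, 0.3, 0.5]), Soma()
rng = np.random.default_rng(2)
for t in range(10):
    spikes_in = rng.integers(0, 2, size=3)          # random incoming spikes
    fired = soma.step(dendrites.integrate(spikes_in))
    print(f"t={t} inputs={spikes_in.tolist()} fired={fired}")
```

The appeal of mapping these two roles onto dedicated hardware is that the "learning" lives in the weights and dynamics rather than in large bodies of conventional code, which is the point Versace's research makes.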

AI

Facebook's AI Keeps Inventing Languages That Humans Can't Understand (fastcodesign.com) 170

"Researchers at Facebook realized their bots were chattering in a new language," writes Fast Company's Co.Design. "Then they stopped it." An anonymous reader summarizes their report: Facebook -- as well as Microsoft, Google, Amazon, and Apple -- said they were more interested in AI's that could talk to humans. But when two of Facebook's AI bots negotiated with each other "There was no reward to sticking to English language," says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). Co.Design writes that the AI software simply, "learned, and evolved," adding that the creation of new languages is a phenomenon Facebook "has observed again, and again, and again". And this, of course, is problematic.

"Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought. The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another."

One of the researchers believes that that's definitely going in the wrong direction. "We already don't generally understand how complex AIs think because we can't really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse."
AI

Elon Musk Warns Governors: Regulate AI Before It's 'Too Late' (recode.net) 201

turkeydance shared a new article from Recode about Elon Musk: He's been warning people about AI for years, and today called it the "biggest risk we face as a civilization" when he spoke at the National Governors Association Summer Meeting in Rhode Island. Musk then called on the government to proactively regulate artificial intelligence before things advance too far... "Normally the way regulations are set up is a whole bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry," he continued. "It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization"... Musk has even said that his desire to colonize Mars is, in part, a backup plan for if AI takes over on Earth.
Several governors asked Musk how to regulate the emerging AI industry, to which he suggested learning as much as possible about artificial intelligence. Musk also warned that society won't know how to react "until people see robots going down the street killing people... I think by the time we are reactive in AI regulation, it's too late."
Biotech

Can AI Replace Hospital Radiologists? (cnn.com) 112

An anonymous reader quotes CNN: Radiologists, who receive years of training and are some of the highest paid doctors, are among the first physicians who will have to adapt as artificial intelligence expands into health care... Today radiologists face a deluge of data as they serve patients. When Jim Brink, radiologist in chief at Massachusetts General Hospital, entered the field in the late 1980s, radiologists had to examine 20 to 50 images for CT and PET scans. Now, there can be as many as 1,000 images for one scan. The work can be tedious, making it prone to error. The added imagery also makes it harder for radiologists to use their time efficiently... The remarkable power of today's computers has raised the question of whether humans should even act as radiologists. Geoffrey Hinton, a legend in the field of artificial intelligence, went so far as to suggest that schools should stop training radiologists.
X-rays, CT scans, MRIs, ultrasounds and PET scans do improve patient care -- but they also drive up costs. And now one medical imaging startup can read a heart MRI in 15 seconds, a procedure that takes a human 45 minutes. Massachusetts General Hospital is already assembling data to train algorithms to spot 25 common scenarios. But Brink predicts that ultimately AI will become more of a sophisticated diagnostic aid, flagging images that humans should examine more closely, while leaving radiologists with more time for interacting with patients and medical staff.
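The "diagnostic aid" role Brink describes is essentially triage: a model scores each scan and only the high-scoring ones are pushed to a radiologist's urgent worklist. The sketch below shows that workflow under stated assumptions; the classifier is a stub returning placeholder scores, and the threshold value is invented for illustration.

```python
# Minimal triage sketch (not any hospital's actual system): a model scores each
# image, and scans above an urgency threshold are flagged for prompt human review.
import random

URGENCY_THRESHOLD = 0.8   # assumed cutoff, tuned for an acceptable miss rate in practice

def model_score(image_id: str) -> float:
    """Stand-in for a trained image model estimating probability of an abnormal finding."""
    random.seed(image_id)                  # deterministic placeholder score per image
    return random.random()

def triage(image_ids):
    """Split scans into (flagged_for_review, routine), highest scores first."""
    scored = sorted(((model_score(i), i) for i in image_ids), reverse=True)
    flagged = [(i, round(s, 2)) for s, i in scored if s >= URGENCY_THRESHOLD]
    routine = [(i, round(s, 2)) for s, i in scored if s < URGENCY_THRESHOLD]
    return flagged, routine

flagged, routine = triage([f"scan_{n:03d}" for n in range(10)])
print("needs prompt human review:", flagged)
print("routine queue:", routine)
```

The design choice is deliberately conservative: the model reorders the radiologist's queue rather than issuing diagnoses, which matches the prediction that AI becomes an aid rather than a replacement.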
AI

Artificial Intelligence Has Race, Gender Biases (axios.com) 463

An anonymous reader shares a report: The ACLU has begun to worry that artificial intelligence is discriminatory based on race, gender and age. So it teamed up with computer science researchers to launch a program to promote applications of AI that protect rights and lead to equitable outcomes. MIT Technology Review reports that the initiative is the latest to illustrate general concern that the increasing reliance on algorithms to make decisions in the areas of hiring, criminal justice, and financial services will reinforce racial and gender biases. One example is a computer program used by jurisdictions to help with paroling prisoners, which ProPublica found would go easy on white offenders while being unduly harsh to black ones.
Transportation

The Audi A8: First Production Car To Achieve Level 3 Autonomy (ieee.org) 375

schwit1 shares a report from IEEE Spectrum: The 2018 Audi A8, just unveiled in Barcelona, counts as the world's first production car to offer Level 3 autonomy. Level 3 means the driver needn't supervise things at all, so long as the car stays within guidelines. Here that involves driving no faster than 60 kilometers per hour (37 mph), which is why Audi calls the feature AI Traffic Jam Pilot. Go ahead, Audi's saying, read your newspaper or just zone out while traffic creeps along. To be sure, the A8 also monitors the driver, even while the traffic jam persists, and continues to do so as the speed edges up over the limit. If the driver falls asleep, it'll wake him up; if it can't get his attention, it will stop the car. If you want to buy the new A8, you'll have to check whether your jurisdiction will accept it as a Level 3 car. Audi said in a statement that it will follow "a step-by-step approach" to introducing the traffic jam pilot. It plans to sell the base model in Europe this fall for 90,600 euros, or about $103,000, and to enter the United States market shortly afterwards. A model having a longer wheelbase will cost a few percent more.
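The hand-off behavior described above (engage only below 60 km/h, prompt the driver when conditions lapse, stop if the driver never responds) is easy to picture as a small state machine. The sketch below is a toy illustration of that logic as reported, not Audi's actual control software; the number of warnings before stopping is an invented assumption.

```python
# Toy sketch of the traffic jam pilot hand-off logic as described in the article
# (not Audi's implementation): autonomy only below 60 km/h, escalate to driver
# alerts when the speed limit is exceeded, stop the car if the driver stays
# unresponsive after repeated warnings.
SPEED_LIMIT_KMH = 60
MAX_WARNINGS = 3          # assumed; the article doesn't specify a count

def pilot_step(speed_kmh: float, driver_responsive: bool, warnings: int) -> tuple[str, int]:
    """Return (action, updated_warning_count) for one control tick."""
    if speed_kmh <= SPEED_LIMIT_KMH:
        return "autonomous: traffic jam pilot engaged", 0
    if driver_responsive:
        return "hand control back to driver", 0
    if warnings < MAX_WARNINGS:
        return "alert driver to take over", warnings + 1
    return "driver unresponsive: bring car to a safe stop", warnings

warnings = 0
for speed, awake in [(35, True), (55, True), (70, True),
                     (70, False), (70, False), (70, False), (70, False)]:
    action, warnings = pilot_step(speed, awake, warnings)
    print(f"{speed} km/h, responsive={awake} -> {action}")
```

Running the loop shows the reported sequence: full autonomy in the creep of a traffic jam, a clean hand-back when the driver is alert at higher speed, and escalating alerts ending in a controlled stop when the driver does not respond.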
AI

'World's First Robot Lawyer' Now Available In All 50 States (theverge.com) 79

An anonymous reader quotes a report from The Verge: A chatbot that provides free legal counsel using AI is now available in all 50 states starting today. This follows its success in New York, Seattle, and the UK, where it was invented by British entrepreneur Joshua Browder. Browder, who calls his invention "the world's first robot lawyer," estimates the bot has helped defeat 375,000 parking tickets in a span of two years. Browder, a junior at Stanford University, tells The Verge via Twitter that his chatbot could potentially experience legal repercussions from the government, but he is more concerned with competing with lawyers.

"The legal industry is more than a 200 billion dollar industry, but I am excited to make the law free," says Browder. "Some of the biggest law firms can't be happy!" Browder believes that his chatbot could also save government officials time and money. "Everybody can win," he says, "I think governments waste a huge amount of money employing people to read parking ticket appeals. DoNotPay sends it to them in a clear and easy to read format."

Google

After Go, Developers Are Now Building AI To Beat Us at Soccer (cnet.com) 123

After Google's AlphaGo artificial intelligence bested our best Go player, South Korea is now setting its sights on making AI that can play soccer. From a report: Hosted by the Korea Advanced Institute of Science & Technology (KAIST), the AI World Cup will see university students across South Korea developing AI programs to compete in a series of online games, reported The Korea Times. The prelims will begin in November. "The football matches will be conducted in a five on five tournament," a KAIST spokesperson told the publication on Tuesday. "Each of the five AI-programmed players in such positions as striker, defender and goalkeeper will compete with their counterparts."
Transportation

Could Technology Companies Solve Traffic Congestion? (bloomberg.com) 151

As the Indian city of Bangalore "grapples with inadequate roads, unprecedented growth and overpopulation," can technology companies find a solution? randomErr writes: Tech giants and startups are turning their attention to a common enemy: the Indian city's infernal traffic congestion. Commutes that can take hours have inspired the Gridlock Hackathon, a contest for technology workers to find solutions to the snarled roads that cost the economy billions of dollars. While the prize totals a mere $5,500, it's attracting teams ranging from global giants Microsoft Corp., Google and Amazon.com Inc. to local startups including Ola.
Bloomberg reports that the ideas "range from using artificial intelligence and big data on traffic flows to true moonshots, such as flying cars... Other entries included Internet of Things-powered road dividers that change orientation to handle changing conditions. There is also a proposal for a reporting system that tracks vehicles that don't conform to the road rules..." And one hackathon official says a team "suggested building smart roads underneath the city and another has sent in detailed drawings of flying cars." Any more bright ideas -- and more importantly, do any of these solutions really have a chance of succeeding?
Crime

Google Home Ends A Domestic Dispute By Calling The Police (gizmodo.com) 256

An anonymous reader quotes Gizmodo: According to ABC News, officers were called to a home outside Albuquerque, New Mexico this week when a Google Home called 911 and the operator heard a confrontation in the background. Police say that Eduardo Barros was house-sitting at the residence with his girlfriend and their daughter. Barros allegedly pulled a gun on his girlfriend when they got into an argument and asked her: "Did you call the sheriffs?" Google Home apparently heard "call the sheriffs," and proceeded to call the sheriffs. A SWAT team arrived at the home and after negotiating for hours, they were able to take Barros into custody... "The unexpected use of this new technology to contact emergency services has possibly helped save a life," Bernalillo County Sheriff Manuel Gonzales III said in a statement.
"It's easy to imagine police getting tired of being called to citizen's homes every time they watch the latest episode of Law and Order," quips Gizmodo. But they also call the incident "a clear reminder that smart home devices are always listening."
AI

Former Oculus Exec Predicts Telepathy Within 10 Years (cnet.com) 202

Mary Lou Jepsen is a former MIT professor with 100 patents and a former engineering executive at Facebook, Oculus, Intel, and Google[x] (now called X) -- and "she hopes to make communicating telepathically happen relatively soon." An anonymous reader quotes CNET: Last year Jepsen left her job heading up display technology for the Oculus virtual reality arm of Facebook to develop new imaging technologies to help cure diseases. Shortly thereafter she founded Openwater, which is developing a device that puts the capabilities of a huge MRI machine into a lightweight wearable form. According to the startup's website, "Openwater is creating a device that can enable us to see inside our brains or bodies in great detail. With this comes the promise of new abilities to diagnose and treat disease and well beyond -- communicating with thought alone."

This week Jepsen went further and suggested a timeframe for such capabilities becoming reality. "I don't think this is going to take decades," she told CNBC. "I think we're talking about less than a decade, probably eight years until telepathy"... Jepsen, who has also spent time at Google X, MIT and Intel, says the basic idea is to shrink down the huge MRI machines found in medical hospitals into flexible LCDs that can be embedded in a ski hat and use infrared light to see what's going on in your brain. "Literally a thinking cap," Jepsen explains... The idea is that communicating by thought alone could be much faster and even allow us to become more competitive with the artificial intelligence that is supposedly coming for everyone's jobs very soon.

Jepsen tells CNBC, "If I threw [you] into an M.R.I. machine right now... I can tell you what words you're about to say, what images are in your head. I can tell you what music you're thinking of. That's today, and I'm talking about just shrinking that down."
The Media

Google Funds A Team Of Robot Journalists (theguardian.com) 43

Darren Sharp brings news about the arrival of robot journalists. The Guardian reports: Robots will help a national news agency to create up to 30,000 local news stories a month, with the help of human journalists and funded by a Google grant. The Press Association has won a €706,000 ($800,779 or £621,000) grant to run a news service with computers writing localised news stories. The national news agency, which supplies copy to news outlets in the U.K. and Ireland, has teamed up with data-driven news start-up Urbs Media for the project, which aims to create "a stream of compelling local stories for hundreds of media outlets"... PA's editor-in-chief, Peter Clifton, said journalists will still be involved in spotting and creating stories and will use artificial intelligence to increase the amount of content. He said: "Skilled human journalists will still be vital in the process, but Radar [the Reporters And Data And Robots project] allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually." Journalists will create "detailed story templates" for articles about crime, health, and employment, for example, then use natural language software to create multiple versions to "scale up the mass localization."
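The Radar approach described above amounts to journalists writing "detailed story templates" that software fills with per-locality data to produce thousands of variants. The sketch below is a hedged illustration of that idea only, not PA or Urbs Media's actual system, and the template wording, place names, and figures are invented for the example.

```python
# Illustrative sketch of template-driven local news generation (not the Radar
# system): a journalist-written template is filled with per-locality figures to
# produce a localized version of the same story for each area.
TEMPLATE = (
    "{area} recorded {count} burglaries in {month}, "
    "{direction} of {percent}% on the same month last year."
)

LOCAL_DATA = [  # invented example figures
    {"area": "Salford", "count": 112, "month": "June", "change": -8.0},
    {"area": "Ipswich", "count": 64,  "month": "June", "change": 12.5},
]

def localise(row: dict) -> str:
    """Produce one localized story line from the shared template."""
    direction = "a rise" if row["change"] > 0 else "a fall"
    return TEMPLATE.format(area=row["area"], count=row["count"],
                           month=row["month"], direction=direction,
                           percent=abs(row["change"]))

for row in LOCAL_DATA:
    print(localise(row))
```

Scaling this from two rows to a national open dataset is what lets a small editorial team sign off on tens of thousands of localized stories a month, with the human effort concentrated in writing and checking the templates rather than each individual article.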
