
What Happened in AI This Month: March 2017 Edition


Time for Mighty AI’s monthly AI news roundup! Here’s what happened in March:

  • Retale, a mobile-tech company, commissioned a survey of millennials on “Voice First” and chatbots, and the results were surprising. Read.
  • Facebook developed algorithms to detect suicidal warning signs in users’ posts. Read.
  • ReadITQuik looked at how AI will shape the future of banking. Read.
  • Motherboard explored how autonomous vehicles could reduce racial bias in police pull-overs. Read.
  • Co.Design covered MIT Ph.D. candidate and CivilServant creator J. Nathan Matias’s experiment on influencing algorithms via human feedback on algorithmic output. Read.
  • The New York Times detailed the monumental—but necessary—task of creating an always-updated digital road map for autonomous cars. Read.
  • The Atlantic explained how machine learning and AI could soon help us see (way) far-out places in space. Read.
  • Carlos E. Perez, Editor of Intuition Machine, published a brain-buster of an article on deep meta learning or “Learning to Learn.” Read.
  • The Exponential View podcast did an interview with Professor Jeffrey Sachs, an economic development expert and advisor to the UN and multiple governments, covering automation, jobs, inequality, universal basic income, and more. Listen.
  • MIT Technology Review looked at the potential role of AI in the criminal justice system, presenting a study showing that an algorithm trained to predict the “flight risk” of defendants awaiting trial performed better than human judges. Read.
  • Quartz covered how MIT researchers hooked a robot up to a brain monitor, allowing a human to correct the robot’s behavior by…thinking. Read.
  • TechCrunch reported Google is acquiring Kaggle. Read.
  • Ars Technica highlighted that crazy case of an Amazon Echo possibly recording the sounds of a murder. Read.
  • Google demoed its Cloud Video Intelligence API, a new object recognition technology for video. Read.
  • “Jay,” a popular AI writer, explored how AI will change the architectural design process. Read.
  • Ines Montani, co-founder of Explosion AI, wrote a fantastic critical analysis on how the industry thinks about, positions, and messages AI—and all the ways we’re getting it wrong. Read.
  • Benedict Evans discussed the reasons voice (as in, voice input for natural language processing) is so hot right now, and the problems that come along with it. Read.
  • MIT Technology Review explained Facebook AI chief Yann LeCun’s view that computer vision could lead to software that learns common sense by watching videos. Read.
  • Baidu said it plans to eventually spin off a self-driving car unit. Read.
  • IEEE Spectrum examined how Drive.ai is using deep learning for autonomous driving. Read.
  • Intel bought Mobileye. Read.
  • DeepMind explained progressive (or continual) learning in neural networks, and how enabling it is key to building more intelligent technologies. Read.
  • Ben Medlock, co-founder of SwiftKey, argued the industry’s focus on replicating the brain’s intelligence and function is short-sighted because “we think with our whole body, not just with the brain.” Read.
  • Sebastian Huempfer, Communications Manager at Echobox, reviewed 100 things robots and programs have learned to do so far—in just this year. Read.
  • Nvidia announced a partnership with Bosch, which will sell Nvidia’s self-driving platform. Read.
  • A trio of data science/engineering/product pros helpfully outlined several techniques for accurately interpreting machine learning results. Read.
  • Researchers at Brown University trained a robot to ask clarifying questions and communicate points of confusion when it’s tasked with fetching objects. Read.
  • Igor Mordatch, a roboticist and visiting researcher at OpenAI, and his associates built a virtual world in which chatbots created their own language to communicate with each other. Read.
  • BMW said it will produce a self-driving car with Level 5 autonomy by 2021. Read.
  • University of Rochester researchers created an algorithm that detects racist code words using contextual cues. Read.
  • Mike Loukides at O’Reilly wondered intelligently if it’s possible for ethics to be computable, and if so, what that would look like. Read.
  • Scientific American examined how pedestrian behaviors—mainly, walking out in front of cars at crosswalks—could pose real (traffic) challenges for autonomous vehicles. Read.
  • MIT Technology Review showed us how startup Soul Machines is giving chatbots “expressive digital faces” that read and react to human facial movements. See and read.
  • Andrew Ng left Baidu. Read.
  • MIT Technology Review reviewed (ha) the many encouraging ways machine learning is helping folks with disabilities. Read.
  • The Atlantic told us about wonderful killer robots—okay, starfish-killer robots—that are working to protect coral reefs from the reef-eating crown-of-thorns starfish. Read.
  • Kathryn Hume of the AI research consultancy Fast Forward Labs outlined five big narratives in AI that are leading us down the wrong paths and distracting us from real problems. Read.
  • O’Reilly did an interview with Geoff Hinton, Google engineering fellow and emeritus distinguished professor at the University of Toronto, on how further study in neuroscience will help advance AI. Read.

image credit: libreshot.com