In this episode of AI Adventures, Yufeng explains how to build models for more complex datasets, using TensorFlow to build deep neural networks!
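The blurb only names the topic, but independent of the specific TensorFlow APIs the episode uses, the core computation of a deep neural network is a stack of affine transforms with nonlinearities in between. A minimal NumPy sketch of that forward pass (the layer sizes and ReLU activation here are illustrative assumptions, not the episode's model):

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass x through each (weights, bias) pair, applying ReLU between
    # layers; the final layer is left linear.
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Toy 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8)),
          (rng.standard_normal((8, 2)), np.zeros(2))]
batch = rng.standard_normal((5, 4))   # batch of 5 examples
out = forward(batch, layers)          # shape (5, 2)
```

In TensorFlow this stack would typically be expressed with high-level layer objects rather than raw matrix products, with training handled by automatic differentiation.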
In this episode of AI Adventures, Yufeng takes us on a tour of TensorBoard, the visualizer built into TensorFlow, to visualize and help debug models.
We are on the cusp of the 4th Industrial Revolution. Industry 4.0 – defined by breakthroughs in emerging technologies such as Robotics, Artificial Intelligence, the Internet of Things, 3D Printing, Autonomous Vehicles and Quantum Computing – will once again create a massive shift. Earlier industrial revolutions ushered in the mechanization of previously manual tasks, leading to a huge jump in production output and operational efficiency while creating a new range of skills for the workforce to master. According to Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, the transformation driven by this technological revolution will be unlike anything…
And how do gas pressures determine the effects seen when a high voltage is applied across a gas? Watch the video about the experiments that were important in the exploration of cathode rays and eventually led to the discovery of the electron by J.J. Thomson.
A fleet of autonomous “Smart Carts” – high-tech, 3D-printed, low-speed electric vehicles – could one day zip around the University of Michigan North Campus, taking students, professors and staff to class, labs and offices while also serving as one of the first test beds for on-demand autonomous transit. In an early step toward that goal, U-M researchers received a custom, 3D-printed vehicle from technology company Local Motors. Over the next year, Edwin Olson, an associate professor of Electrical Engineering and Computer Science who leads the project, and his team of U-M researchers will develop autonomy capabilities and build…
Michigan researchers have introduced DjiNN and Tonic Suite, open source deep learning tools that are designed to help in the development of deep learning as a service.
Sirius, an open-source digital assistant created at Michigan, can serve as a powerful tool for researchers modeling the data center workloads of the future, which will be based heavily on image and voice processing and Q&A services rather than text searches. It can also help researchers improve the digital assistant itself.
Will advances in artificial intelligence bring us closer to having robots in our homes? A Michigan Engineering expert weighs in on the goals and outlook for research in making robots that think like humans. The idea of artificial intelligence is rooted in creating a mind that has the same flexibility and capabilities as a human mind — or even more. Although research has advanced in a variety of areas of human intelligence, such as voice and face recognition, the next question is how to integrate these separate aspects into a fully capable brain, says U-M professor Satinder…
Rosie, an interactive task learning robot built on the Soar cognitive architecture, learns the rules of a foam block Tower of Hanoi through situated interactive instruction and then solves the puzzle.
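For reference, the puzzle Rosie learns has a classic recursive solution. The sketch below solves the puzzle directly and is not a model of Rosie's situated interactive instruction — it only shows the move sequence the completed task requires (peg names are illustrative):

```python
def hanoi(n, src, dst, aux, moves):
    """Append the moves that transfer a tower of n blocks from src to dst."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)   # clear the top n-1 blocks out of the way
    moves.append((src, dst))             # move the largest remaining block
    hanoi(n - 1, aux, dst, src, moves)   # stack the n-1 blocks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
# A 3-block tower takes 2**3 - 1 = 7 moves.
```

The recursion mirrors the puzzle's structure: solving for n blocks reduces to two solutions for n − 1 blocks plus one move, giving 2ⁿ − 1 moves in total.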
Prof. Olson’s research includes finding ways for robots to sense and understand their environment while coping with uncertainty and ambiguity. The perception problem is central to a variety of practical applications, from indoor robots that can lead tours or deliver mail to autonomous cars that can navigate urban environments. His work includes both fundamental algorithm research (optimization, state estimation, classification) and system building.
Prof. Ben Kuipers, CSE graduate student Collin Johnson, and ME graduate student Jong Jin Park have created Vulcan, an intelligent robotic wheelchair. Vulcan learns the spatial structure of the environment it moves through and uses that knowledge to plan and follow routes from place to place. Robotic wheelchairs will benefit people who need a wheelchair but are unable to use one because of multiple disabilities.
By studying videos from high-stakes court cases, University of Michigan researchers are building unique lie-detecting software based on real-world data. Their prototype considers both the speaker’s words and gestures, and unlike a polygraph, it doesn’t need to touch the subject in order to work. In experiments, it was up to 75 percent accurate in identifying who was being deceptive (as defined by trial outcomes), compared with humans’ scores of just above 50 percent. The system might one day be a helpful tool for security agents, juries and even mental health professionals.