CES 2018: Robotics Roundup


At CES 2018, the show floor was filled with robots of all shapes and sizes, from tiny family robots, humanoid enterprise robots, and toy robots through to robot taxis and autonomous cars. Barring a few surprises and interesting form factors, the robotics showcase felt mostly stuck in 2016. As mentioned in our AI roundup, there are definite signs of the “recognition stack” maturing: several companies offered face, object, and voice recognition, including software and integrated software-hardware development boards, or, in the case of autonomous transport, full-vehicle reference designs. The hope is that the robotics market will attract more developers as the software and hardware stacks mature, or we may see a new class of reinforcement learning-based systems in which robots teach themselves rather than being coded. Whatever the future holds, robots are here to stay at CES, although it would be nice to see some of the hype replaced with robots that bring more compelling value to both consumers and businesses.

Consumer and Enterprise Robots Still Need to Grow Up, While Experimentation Continues

For the most part, there were no major surprises in the consumer and enterprise robot space. Most of the companies showcasing solutions offered more of the same, with a large representation from China as companies like Sanbot and UBTECH continue to expand their markets in China and beyond, with some activity in Europe and North America. UBTECH, which made its name in toy robots, is now expanding into the enterprise market and had some interesting designs on offer, such as the Walker, which has no arms, and the Cruzr, whose mid-torso and head are fused into one large screen. This shows that companies are experimenting with the form factor of humanoid robots, possibly moving away from the humanoid form in favor of functionality. The much-awaited Jibo robot had a booth at the show, and although it has faced numerous delays, there is something compelling about its design and human-like neck movements.

Another notable example of experimentation was Segway’s Loomo, which doubles as an autonomous robot and a personal transportation system. The Loomo is being positioned as both a consumer and an enterprise robot, testing the market to see where it sticks. It is a robot on two wheels, making it highly mobile and faster than most other enterprise robots, and it uses the proprietary self-balancing technology that made Segway popular in the first place. Segway might have something very powerful on its hands, especially as a surveillance and security solution: with a top speed of 18 kph (11 mph), the Loomo can speedily reach a security alert, possibly even faster than a human, navigating the best route and avoiding obstacles on the fly. Much of the navigation and obstacle avoidance is not yet built in, but it is due to arrive in future software updates. More generally, two wheels is where the robotic form factor is converging, not just for security but also for hotels, retail stores, doctors’ reception areas, and so on. The recent “nightmare-inducing” robot from Boston Dynamics that has been making the rounds of the internet is also on two wheels. Two wheels give a robot uniquely humanoid mobility, and beyond matching human abilities in vision and language, swift two-wheeled movement (when done right) gets robots like the Loomo halfway across the “uncanny valley.” Of course, the uncanny valley could grow deeper and wider if the two-wheeled Boston Dynamics robot ever sets foot in the real world!

And then there were robots like Sony’s relaunch of Aibo, its well-known robot dog. While it looks much more believable than its previous incarnation and takes advantage of deep learning-based advances in vision and voice recognition, its hefty $1,700 price tag makes it a difficult consumer proposition. LG had a disastrous launch with its malfunctioning Cloi robot during its CES keynote, as it failed to take voice commands. There were similar issues with Jibo at its booth and with Buddy at Blue Frog Robotics. Failure to take commands has been a common issue with family robots, one that can be traced back to demos from 2016, when some of these robots first appeared.

It is surprising that robot companies are still struggling with voice recognition in 2018, when the voice recognition stack has reached a point of maturity. It also shows how far ahead Amazon, Google, and Baidu are of the rest: their investment in advancing the underlying AI algorithms gives them a clear edge.

Coding Robots versus Self-Learning Robots

Misty Robotics, a spinoff from robotic toy maker Sphero, showcased its first developer edition robot kit. The Misty I Developer Edition is being opened to a small, select group of developers, chosen in part for their lack of robotics programming experience. Misty hopes to democratize the robotics developer ecosystem and provide an open platform that makes developing skills for robots as easy as building a web or mobile application. Misty II, the second version of the developer kit, will incorporate feedback from this group ahead of a broader launch. Misty believes the road to success for consumer robots will follow a path similar to that of the personal computer (PC) industry, where a select group of hobbyists and tinkerers expanded the platform’s capabilities before it reached prime time, paving the way for successful companies like Microsoft and Apple.

However, one thing Misty might be overlooking is the rapid advances in reinforcement learning, one of the ways robots can acquire skills. Rather than coding a robot line by line, the traditional way of giving commands or feeding logic to a system, reinforcement learning lets the robot learn from experience, given a reward function and a goal. Typically, reinforcement learning is done in simulated environments such as games, although simulation software for industrial and control systems is now starting to see use of reinforcement learning as well. At the show, another robotics company, Kindred, talked about how it is using reinforcement learning to teach its robots to pick and sort objects, learning in live environments with humans assisting the robot as it learns and makes mistakes. Kindred essentially offers collaborative robot solutions built on reinforcement learning, an interesting theme to keep an eye on for the future.
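The reward-and-goal loop described above can be sketched in a few lines. This is a minimal, generic Q-learning example on a toy one-dimensional corridor, not Kindred’s actual system or any vendor’s API; the environment, states, and hyperparameters are all illustrative assumptions.

```python
import random

random.seed(0)  # make the sketch deterministic

# Toy environment (illustrative): a corridor of 5 cells, 0..4.
# The agent starts at cell 0 and is rewarded only on reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]              # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only when the goal cell is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        # Q-learning update: move toward reward plus discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned greedy policy: "move right" (+1) from every non-goal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

No one writes “move right toward the goal” anywhere in this code; the behavior emerges from the reward signal, which is exactly the contrast with line-by-line coding that the Misty-versus-Kindred discussion turns on.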

The question is whether the future of robots lies in coding them, as Misty proposes, creating developer editions and software frameworks that apply standard software development techniques to robots, or with companies like Kindred that are moving away from hand-coding altogether and training robots with reinforcement learning, either in simulation or in real time with human assistance. Only time will tell.

Geofenced (and Boring) Robot Taxis and Shuttles Are Here

Last year, CES doubled as a preview of the Detroit Auto Show, with the North Halls showcasing the latest in autonomous car technology. This year, the autonomous transport theme continued, but felt bigger and more mature. Although everyone is looking forward to 2020, when some of the first Level 4 (fully autonomous) cars are expected to hit the road, the focus this year was on the near term. Autonomous taxis and shuttles are seen as more practical, near-term ways to get autonomous technology on the road today: they can be geofenced to a specific neighborhood or city, which makes them easier to regulate. Navya, a French autonomous transport company, is one of the pioneers in this space, with approximately 60 shuttles already operating across sites in Europe and the United States.
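The geofencing idea above boils down to a boundary test before a trip is accepted. Here is a minimal sketch using a rectangular latitude/longitude bounding box; the coordinates are invented for illustration and do not describe any real deployment zone, and a production system would use polygon zones and a geodesy-aware library rather than a simple box.

```python
# Illustrative service-area bounding box (made-up coordinates, roughly
# downtown Las Vegas); not any operator's actual geofence.
SERVICE_AREA = {
    "lat_min": 36.155, "lat_max": 36.175,
    "lon_min": -115.145, "lon_max": -115.125,
}

def in_service_area(lat, lon, area=SERVICE_AREA):
    """Return True if a requested pickup point lies inside the geofence."""
    return (area["lat_min"] <= lat <= area["lat_max"]
            and area["lon_min"] <= lon <= area["lon_max"])

# A pickup inside the box is accepted; one outside it is refused.
print(in_service_area(36.165, -115.135))  # True
print(in_service_area(36.100, -115.135))  # False
```

Keeping every vehicle inside a known, auditable boundary like this is what makes the geofenced model attractive to regulators: the operating domain is explicit rather than open-ended.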

At CES, Navya was demoing its fully autonomous Autonom Cab and offering rides. Although the cab was operating in an enclosed parking lot, moving along a fixed loop, it did offer a glimpse of what it would be like to ride in a robot taxi from the future. It was equipped with a mobile app-controlled entertainment system and location information on giant screens. The Autonom Cab seats six people, in two sets of three facing each other, which felt like a roomier, more futuristic version of the London black cab, which seats five. Navya also had its Autonom Shuttle offering rides in the Fremont district of Las Vegas throughout the show, and it is partnering with Keolis to roll out autonomous cab and shuttle technology across cities. Keolis is one of the largest providers of public transport solutions and services in France and has a growing presence in Europe and North America.

Ride-hailing company Lyft and automotive supplier Aptiv were also offering a self-driving taxi service in Las Vegas during the show. Although this was a Level 3 autonomous car, with a safety driver behind the wheel, it did venture out onto the main roads and offered passengers rides to 20 preset destinations. The consensus among people who experienced the robot taxi firsthand was that it felt like having your grandma drive you, the complaint being that the AI was programmed as an overcautious driver. This raises an interesting question: could there be a “cautiously aggressive” version of the robot taxi in the future, one that gets you from A to B in the style of a New York or Mumbai cabbie, but without hitting anyone? It is hard to imagine regulators green-lighting that! In other words, get used to boring rides in robot taxis and shuttles. It also explains why we need better entertainment systems in autonomous cars.
