Working Dog[bot]: Robot Loaded with Sensors and Cameras

May 24, 2023
Ghost Robotics’ Vision 60 quadruped robot can autonomously navigate difficult terrain for military and security missions.

This video appeared in Electronic Design and has been published here with permission.

Check out our AUVSI Xponential 2023 coverage.

Ghost Robotics is going to the dogs. The Vision 60 robot dog, that is (Fig. 1). The Vision 60 is a quadruped unmanned ground vehicle (Q-UGV).

The Vision 60 is an all-weather ground drone that employs a proprietary blind-mode control methodology that allows for movement over rough terrain. It employs cameras and 3D imaging, but the robot can operate without these advantages. It can adjust to slippery and uneven ground, and right itself should it fall over. 

The IP67-rated system weighs in at 51 kg with support for a payload up to 10 kg. The electric vehicle has a top speed of 3 m/s and its three-hour runtime lets the robot move up to 10 km. The robot can operate for up to a day if it's not moving all the time. 

The smarts are provided by an NVIDIA Xavier module. It's designed for fast deployment with a setup or teardown time of under 15 minutes. And it can automatically find a charging station. 

The robot can be controlled remotely or work autonomously. An array of sensors provides simultaneous localization and mapping (SLAM) so that it can navigate obstacles such as stairs more easily, as well as avoid collisions. It can handle degraded inputs and uses force feedback from its legs. The control system is designed to provide feedback to an operator. Operators can set up the robot to use waypoint navigation. The robots also have follow-the-leader capability. 
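Waypoint navigation of this kind typically reduces to a steer-toward-goal control loop: compute the heading error to the next waypoint and command forward speed and turn rate from it. Below is a minimal sketch in Python, assuming a simple 2D pose and illustrative gain values; the function name and parameters are hypothetical, not Ghost Robotics' actual API.

```python
import math

def waypoint_step(pose, waypoint, k_turn=1.5, cruise=1.0, tol=0.5):
    """One control step toward a waypoint.

    pose: (x, y, heading_rad); waypoint: (x, y).
    Returns (forward_speed, turn_rate), or None once within tol meters.
    Gains and tolerance here are illustrative placeholders.
    """
    x, y, heading = pose
    wx, wy = waypoint
    dx, dy = wx - x, wy - y
    if math.hypot(dx, dy) < tol:          # close enough: waypoint reached
        return None
    # Heading error, wrapped into [-pi, pi]
    err = math.atan2(dy, dx) - heading
    err = math.atan2(math.sin(err), math.cos(err))
    turn = k_turn * err                   # turn proportionally to the error
    speed = cruise * max(0.0, math.cos(err))  # slow down while turning hard
    return speed, turn
```

A planner would call this at a fixed rate, popping the next waypoint from a list whenever the function returns None.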


The video transcript below has been edited for clarity. 

Bill Wong: Tell us a little bit about the robot that we’re going to be taking a look at. 

James Honicker: This is the Vision 60 robot, our base robot that we sell to both civilian and DoD customers. It's all the same robot model. The biggest use case we see is persistent security. The robot can navigate fence lines or perimeters that used to require people watching them all the time. With autonomous planning and wireless charging systems, it can do that 24/7.

The robot can detect people, cars, or any other obstacle that you want to set out for it to detect. When it sees that, it can send an alert back to the security center saying, ‘Hey, I saw a person there, I saw a vehicle here, you should go check it out.’ 

The other big use case for the robot is tactical deployment. An operator controls the robot and sends it into scenarios where, for example, EOD teams respond to explosives, or law enforcement or fire search and rescue can use it, or into any dangerous area where you don't want to put human lives at risk.

The robot comes with a wide array of sensor packages out of the box, beginning with five RGB cameras, some of which are stereoscopic. Each camera covers roughly 160 to 170 degrees, so you always have a complete 360-degree view around the robot.

Below the cameras on the front of the robot, there's an Intel RealSense D435 depth sensor that lets it map the ground plane. When enabled, that helps it more elegantly climb things such as stairs or other rigidly structured terrain (Fig. 1).

There's also built-in two-way audio on all the sensor heads, so you can listen and speak through the robot, whether you want to talk to somebody you find in the field, handle a hostage negotiation, or address any other use case where you need to communicate with somebody in a remote location.

The robot is agnostic to external sensor packages; it's designed to incorporate anybody's off-the-shelf or industrial sensors. All users need to do is mount the camera and plug it into the USB slot on the top of the robot. We support a wide array of streaming protocols, from RTSP transport to H.265 and H.264 encoding, and pretty much everything that's standard across the industry. Next to the camera is one of the robot's two GPS antennas; the GPS module is internal.
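Pulling video from such an off-the-shelf payload camera over RTSP can be as simple as pointing OpenCV at the stream URL. The sketch below assumes a hypothetical URL layout and credentials; the actual stream path depends on the specific camera, and OpenCV is imported lazily so the URL helper works without it installed.

```python
def rtsp_url(host, port=554, path="stream", user=None, password=None):
    """Build an RTSP URL for a payload camera.

    The default port 554 is the RTSP standard; the path and credential
    scheme vary by camera and are assumptions here.
    """
    auth = f"{user}:{password}@" if user and password else ""
    return f"rtsp://{auth}{host}:{port}/{path}"

def read_one_frame(url):
    """Grab a single frame from the RTSP stream, or None on failure."""
    import cv2  # lazy import: OpenCV handles RTSP via its FFmpeg backend
    cap = cv2.VideoCapture(url)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```

For example, `read_one_frame(rtsp_url("192.168.1.10", user="admin", password="secret"))` would fetch one frame from a camera at that address, assuming it serves `/stream`.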

On top of the robot, there's a wide array of mounting holes and rail systems, such as Picatinny rails, that accept a wide variety of payloads with quick-release capability (Fig. 2).

Vision 60 is comms agnostic, so out of the box there's no radio, but Wi-Fi and LTE antennas are mounted on the back of the robot. On top, you have LAN, WAN, and a direct line into the onboard computer. With that, we can integrate up to three different IP devices on the robot, as well as USB or SMA camera lines.

The robot also has two power ports: one supplies unregulated 42-V power and the other supplies regulated 12- or 24-V power, adjustable internally on the robot. On the belly of the robot, there are two more depth sensors. The middle section houses the compute box, and below that is the battery.

The battery charges from wall power and fully charges in about an hour and a half. Running at full speed depletes it in about three and a half hours. In mixed use, such as standing, walking a little, and pivoting, the robot gets 8 to 9 hours of power. If it's lying prone, just acting as a sensor package or a comms relay station, it gets 20 to 24 hours out of the battery.

Each leg of the robot has three degrees of freedom, giving 12 joints in total that pivot in real time to keep the robot balanced as it moves around and navigates.

Everything from this point on down is mechanical and is field serviceable. With four bolts, you can take the entire leg module out, throw a new leg in, and it's good to go (Fig. 3). 

Bill Wong: You were talking before about how this robot actually uses other sensors to figure out the terrain and so on. Tell us a little bit more how that works. 

James Honicker: So instead of relying on vision, the robot can, in a sense, feel the ground. Every time a foot strikes the ground, its motors see a current spike. We measure that spike and combine it with the onboard IMU to calculate how the robot struck the ground, what its yaw and pitch are, and how well it's balanced.
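A simplified version of this idea flags a foot strike whenever joint motor current spikes above its recent baseline. The sketch below is illustrative only: the window size and threshold ratio are made-up values, and the real controller fuses this signal with the IMU at a much higher rate.

```python
from collections import deque

class ContactDetector:
    """Flag foot-ground contact from a spike in joint motor current.

    window: number of recent samples used as the baseline.
    spike_ratio: how far above the baseline counts as a strike.
    Both values are hypothetical, not Ghost Robotics' tuning.
    """
    def __init__(self, window=10, spike_ratio=2.0):
        self.history = deque(maxlen=window)
        self.spike_ratio = spike_ratio

    def update(self, current_amps):
        """Feed one current sample; return True if it spikes above baseline."""
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            spiked = current_amps > self.spike_ratio * baseline
        else:
            spiked = False  # not enough history to judge yet
        self.history.append(current_amps)
        return spiked
```

One detector per joint, fed at the motor-control rate, would give the per-leg contact events that the balance calculation described above consumes.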

It can use all that information in real time to do very quick control calculations to balance and stabilize itself. If the robot does flip over, it has the capability to invert all its legs and operate in an upside-down state. If it has specialized payloads on top that need to stay on top, the operator can tell the robot to roll over, and it will go back down, flip itself, and stand back up.

