Boston Dynamics Spot competition

Posted on May 1, 2022 by Zach Lambert ‐ 3 min read

Through working at QinetiQ, I got the opportunity to take part in the “robot dog olympics”, a two-day hackathon organised by the MOD, using Boston Dynamics’ Spot.

Using Spot’s [Python API](https://dev.bostondynamics.com/), we were tasked with a number of loosely Olympics-themed challenges:

  • 100m race: Have Spot run 100m in the shortest amount of time.
  • Bowling: Have Spot pick up a ball, throw it and knock down a number of pins.
  • Choreography: Have Spot perform a dance.
  • Object identification 1: From a ball pit containing balls of various colours, have Spot automatically remove all balls of a specific colour.
  • Object identification 2: Have Spot identify a selected object on the ground, then pick it up automatically. The object was selected from a number of simple, visually distinct objects, such as balls, cones and rings, although some objects were of the same type but a different colour.

My team was able to complete all tasks and ended up taking first place!

These tasks all sound very difficult, especially with only two days, but Spot could already do very complicated things out of the box. The challenge was learning the API and stringing together an appropriate series of commands in a Python script.
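
All of our scripts started from the same SDK boilerplate: connect, authenticate, take the lease, power on and stand. A minimal sketch, assuming a recent version of the bosdyn-client package; the hostname and credentials here are placeholders, not our actual setup:

```python
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

# Connect to the robot and synchronise clocks (required for timed commands).
sdk = bosdyn.client.create_standard_sdk('OlympicsClient')
robot = sdk.create_robot('192.168.80.3')  # placeholder hostname
robot.authenticate('user', 'password')    # placeholder credentials
robot.time_sync.wait_for_sync()

# Hold the body lease for the duration of the script.
lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)
```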

There were four key capabilities for which Spot provided nice APIs, and we made heavy use of them:

  1. Navigate to a given pose in a reference frame: Provide a goal pose in a given reference frame (usually the odom frame), and Spot will automatically move to it. This was used in the 100m race, as well as for any other required motion (see the first sketch after this list).
  2. Move the gripper to a given pose: Again, provide a goal pose in a given frame, this time for the end effector. A camera sat in the centre of the gripper, so this was mainly used to move the gripper to a pose where it could view a scene and identify objects (second sketch below).
  3. Pick up the object at a selected point: For a given point on a camera image, pick up the object at that point. Spot would identify the object, plan an appropriate grasp motion and execute it.
  4. Execute a given dance routine: Provide a file describing a desired dance routine, and Spot will execute it (third sketch below).
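
As an illustration of the first capability, here is roughly what a “move to this pose” call looks like, reusing the command_client from the boilerplate above. This is a sketch based on the public SDK rather than our competition code, and the goal coordinates are made up:

```python
import time

from bosdyn.client.frame_helpers import ODOM_FRAME_NAME
from bosdyn.client.robot_command import RobotCommandBuilder

# Walk to (5, 0) with a zero final heading, expressed in the odometry frame.
cmd = RobotCommandBuilder.synchro_se2_trajectory_point_command(
    goal_x=5.0, goal_y=0.0, goal_heading=0.0,  # illustrative goal pose
    frame_name=ODOM_FRAME_NAME)

# Trajectory commands need an end time; give Spot 20 seconds to get there.
command_client.robot_command(cmd, end_time_secs=time.time() + 20.0)
```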
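
The second capability looks much the same, but for the end effector. Again a sketch, with an invented pose:

```python
from bosdyn.client.frame_helpers import GRAV_ALIGNED_BODY_FRAME_NAME
from bosdyn.client.robot_command import (RobotCommandBuilder,
                                         block_until_arm_arrives)

# Hold the hand 80cm forward of the body and 40cm up, palm facing forward
# (identity orientation), taking two seconds to get there.
arm_cmd = RobotCommandBuilder.arm_pose_command(
    0.8, 0.0, 0.4,       # x, y, z in metres
    1.0, 0.0, 0.0, 0.0,  # qw, qx, qy, qz
    GRAV_ALIGNED_BODY_FRAME_NAME, 2.0)

cmd_id = command_client.robot_command(arm_cmd)
block_until_arm_arrives(command_client, cmd_id)
```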
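
The dance capability was almost entirely declarative: author a routine in Boston Dynamics’ Choreographer tool, then upload and execute it. A rough sketch, assuming a routine saved in Choreographer’s text format (the file name is hypothetical):

```python
import time

from google.protobuf import text_format

from bosdyn.api.spot.choreography_sequence_pb2 import ChoreographySequence
from bosdyn.choreography.client.choreography import ChoreographyClient

# The choreography client isn't in the standard SDK registry, so it has to be
# registered first (before sdk.create_robot in the boilerplate above).
sdk.register_service_client(ChoreographyClient)
choreography_client = robot.ensure_client(ChoreographyClient.default_service_name)

# Parse the text-format routine into a ChoreographySequence proto and upload it.
sequence = ChoreographySequence()
with open('routine.csq') as f:  # hypothetical file name
    text_format.Merge(f.read(), sequence)
choreography_client.upload_choreography(sequence, non_strict_parsing=True)

# Start the dance five seconds from now, from the beginning of the routine.
choreography_client.execute_choreography(
    choreography_name=sequence.name,
    client_start_time=time.time() + 5.0,
    choreography_starting_slice=0)
```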

In particular, the command for “pick up this object” was very impressive. It could handle a variety of objects, reliably planned a suitable grasping motion and didn’t drop the object.
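
That call goes through the manipulation API: you hand it a pixel in a camera image plus the image’s metadata, and the robot does the rest. A sketch along the lines of the SDK’s arm grasp example, with made-up pixel coordinates:

```python
from bosdyn.api import geometry_pb2, manipulation_api_pb2
from bosdyn.client.image import ImageClient
from bosdyn.client.manipulation_api_client import ManipulationApiClient

image_client = robot.ensure_client(ImageClient.default_service_name)
manip_client = robot.ensure_client(ManipulationApiClient.default_service_name)

# Grab a frame from the gripper camera and choose a pixel on the target object.
image = image_client.get_image_from_sources(['hand_color_image'])[0]
pixel = geometry_pb2.Vec2(x=320, y=240)  # illustrative: object centre in image

# Ask the manipulation service to grasp whatever is at that pixel; it handles
# object identification, grasp planning and execution.
pick = manipulation_api_pb2.PickObjectInImage(
    pixel_xy=pixel,
    transforms_snapshot_for_camera=image.shot.transforms_snapshot,
    frame_name_image_sensor=image.shot.frame_name_image_sensor,
    camera_model=image.source.pinhole)
request = manipulation_api_pb2.ManipulationApiRequest(pick_object_in_image=pick)
manip_client.manipulation_api_command(manipulation_api_request=request)
```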

The two hardest challenges were probably the bowling and the second object identification task. There were no built-in routines for identifying objects, so one of my team members wrote some custom Python code for it using OpenCV (a sketch of that kind of colour segmentation closes out this post).

For the bowling, although Spot could easily pick up objects, there was no “throw” command. Additionally, the arm motion was slow and smooth, so it couldn’t generate enough velocity to throw a ball. The best strategy was, rather comedically, the following:

  • Have Spot pick up the ball and move the gripper into a “forward-looking” pose.
  • Open the gripper just enough that the ball sat stationary inside it, but was ready to roll out.
  • Have Spot run forward, then stop. The ball would remain inside the gripper while running, and would exit when Spot stopped.
  • Position the gripper such that when the ball exited, it bounced off the front of Spot’s body and carried on forward to knock over the pins. This relied on a lot of luck; on one of our three attempts at the challenge it worked quite well and knocked over a large number of pins. It helped that the pins were plastic and light, so the ball didn’t need to hit them with much force.
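
Strung together from the building blocks above, the whole routine was only a handful of calls. A rough sketch of the release sequence, with every pose, fraction and distance invented for illustration:

```python
import time

from bosdyn.client.frame_helpers import (GRAV_ALIGNED_BODY_FRAME_NAME,
                                         ODOM_FRAME_NAME)
from bosdyn.client.robot_command import RobotCommandBuilder

# 1. Hold the ball out in front of the body, gripper facing forward.
carry_cmd = RobotCommandBuilder.arm_pose_command(
    0.9, 0.0, 0.4, 1.0, 0.0, 0.0, 0.0,  # illustrative carry pose
    GRAV_ALIGNED_BODY_FRAME_NAME, 2.0)
command_client.robot_command(carry_cmd)
time.sleep(3.0)

# 2. Open the claw just enough that the ball sits loose but doesn't fall out.
command_client.robot_command(
    RobotCommandBuilder.claw_gripper_open_fraction_command(0.6))

# 3. Run forward, then stop; the deceleration sends the ball rolling out,
#    bouncing off the front of the body and on towards the pins.
run_cmd = RobotCommandBuilder.synchro_se2_trajectory_point_command(
    goal_x=4.0, goal_y=0.0, goal_heading=0.0,  # goal in the odom frame
    frame_name=ODOM_FRAME_NAME)
command_client.robot_command(run_cmd, end_time_secs=time.time() + 6.0)
```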
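
Finally, the custom colour identification mentioned earlier. I don’t have my teammate’s exact code to hand, but the standard OpenCV approach is HSV thresholding followed by contour detection, roughly like this (the default thresholds are placeholders that would need tuning to the actual lighting):

```python
import cv2
import numpy as np

def find_coloured_object(image_bgr, hsv_low=(100, 120, 50),
                         hsv_high=(130, 255, 255)):
    """Return the (x, y) pixel at the centre of the largest blob within the
    given HSV range (the defaults roughly cover blue), or None if none found."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))

    # Remove speckle, then take the largest connected contour.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)

    # Centroid of the blob: the pixel we'd feed to the pick command.
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return None
    return int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])
```

The returned pixel is exactly what the pick-object-in-image command takes, which is how the two pieces fit together.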