A method of controlling a robot includes: receiving, by a computing device, from one or more sensors, sensor data reflecting an environment of the robot, the one or more sensors configured to have a field of view that spans at least 150 degrees with respect to a ground plane of the robot; providing, by the computing device, video output to an extended reality (XR) display usable by an operator of the robot, the video output reflecting the environment of the robot; receiving, by the computing device, movement information reflecting movement by the operator of the robot; and controlling, by the computing device, the robot to move based on the movement information.
B25J 11/00 - Manipulators not otherwise provided for
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
Aspects of the present disclosure provide techniques to undo a portion of a mission recording of a robot by physically moving the robot back through the mission recording in reverse. As a result, after the undo process is completed, the robot is positioned at an earlier point in the mission and the user can continue to record further mission data from that point. The portion of the mission recording that was performed in reverse can be omitted from subsequent performance of the mission, for example by deleting that portion from the mission recording or otherwise marking that portion as inactive. In this manner, the mistake in the initial mission recording is not retained, but the robot need not perform the entire mission recording again.
Techniques are described that determine motion of a robot's body that will maintain an end effector within a useable workspace when the end effector moves according to a predicted future trajectory. The techniques may include determining or otherwise obtaining the predicted future trajectory of the end effector and utilizing the predicted future trajectory to determine any motion of the body that is necessary to maintain the end effector within the useable workspace. In cases where no such motion of the body is necessary because the predicted future trajectory indicates the end effector will stay within the useable workspace without motion of the body, the body may remain stationary, thereby avoiding the drawbacks caused by unnecessary motion described above. Otherwise, the body of the robot can be moved while the end effector moves to ensure that the end effector stays within the useable workspace.
Methods and apparatuses for detecting one or more objects (e.g., dropped objects) by a robotic device are described. The method comprises receiving a distance-based point cloud including a plurality of points in three dimensions, filtering the distance-based point cloud to remove points from the plurality of points based on at least one known surface in an environment of the robotic device to produce a filtered distance-based point cloud, clustering points in the filtered distance-based point cloud to produce a set of point clusters, and detecting one or more objects based, at least in part, on the set of point clusters.
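As a rough illustration of the pipeline in this abstract, the sketch below (not part of the publication) filters a point cloud against one known horizontal surface and groups the surviving points into clusters. The plane height, distance tolerance, and brute-force clustering are assumptions chosen for brevity, not the patented method.

```python
import numpy as np

def filter_known_surfaces(points, plane_z=0.0, tolerance=0.03):
    """Drop points within `tolerance` of a known horizontal surface at height `plane_z`."""
    return points[np.abs(points[:, 2] - plane_z) > tolerance]

def euclidean_clusters(points, radius=0.05, min_points=10):
    """Group points into clusters: two points share a cluster if a chain of
    neighbors within `radius` connects them (simple breadth-first grouping)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            rest = np.array(sorted(unvisited), dtype=int)
            if rest.size == 0:
                break
            near = rest[np.linalg.norm(points[rest] - points[idx], axis=1) < radius]
            for j in near:
                unvisited.discard(int(j))
                frontier.append(int(j))
                cluster.append(int(j))
        if len(cluster) >= min_points:
            clusters.append(points[cluster])
    return clusters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    floor = np.column_stack([rng.uniform(0, 2, 500), rng.uniform(0, 2, 500),
                             rng.normal(0, 0.005, 500)])          # points on the known floor
    dropped = rng.normal([1.0, 1.0, 0.10], 0.02, size=(60, 3))     # a small dropped object
    cloud = np.vstack([floor, dropped])
    objects = euclidean_clusters(filter_known_surfaces(cloud), radius=0.06, min_points=20)
    print(f"detected {len(objects)} candidate object(s)")
```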
A virtual bumper configured to protect a component of a robotic device from damage is provided. The virtual bumper comprises a plurality of distance sensors arranged on the robotic device and at least one computing device configured to receive distance measurement signals from the plurality of distance sensors, detect, based on the received distance measurement signals, at least one object in a motion path of the component, and control the robot to change one or more operations of the robot to avoid a collision between the component and the at least one object.
Methods and apparatus for determining a grasp strategy to grasp an object with a gripper of a robotic device are described. The method comprises generating a set of grasp candidates to grasp a target object, wherein each of the grasp candidates includes information about a gripper placement relative to the target object, determining, for each of the grasp candidates in the set, a grasp quality, wherein the grasp quality is determined using a physical-interaction model including one or more forces between the target object and the gripper located at the gripper placement for the respective grasp candidate, selecting, based at least in part on the determined grasp qualities, one of the grasp candidates, and controlling the robotic device to attempt to grasp the target object using the selected grasp candidate.
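The following sketch mirrors the generate-score-select structure described above under simplifying assumptions: the candidate generator samples placements on a circle around the object, and the quality function is a placeholder for the physical-interaction model, so the names (GraspCandidate, grasp_quality, the preferred approach direction) are illustrative only.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    position: np.ndarray   # gripper placement relative to the target object
    approach: np.ndarray   # unit approach direction

def generate_candidates(object_center, num=16, standoff=0.15):
    """Sample gripper placements on a circle around the object (illustrative only)."""
    candidates = []
    for a in np.linspace(0.0, 2.0 * np.pi, num, endpoint=False):
        approach = np.array([-np.cos(a), -np.sin(a), 0.0])
        candidates.append(GraspCandidate(object_center - standoff * approach, approach))
    return candidates

def grasp_quality(candidate, object_center, preferred=np.array([1.0, 0.0, 0.0])):
    """Placeholder quality score standing in for the force-based physical-interaction
    model: closer placements and approaches aligned with a preferred direction score higher."""
    distance = np.linalg.norm(candidate.position - object_center)
    alignment = float(np.dot(candidate.approach, preferred))
    return -distance + 0.5 * alignment

def select_grasp(object_center):
    candidates = generate_candidates(object_center)
    qualities = [grasp_quality(c, object_center) for c in candidates]
    return candidates[int(np.argmax(qualities))]

if __name__ == "__main__":
    best = select_grasp(np.array([0.6, 0.0, 0.4]))
    print("selected placement:", np.round(best.position, 3))
```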
Some robotic arms may include vacuum-based grippers. Detecting the seal quality between each vacuum assembly of the gripper and a grasped object may enable reactivation of some vacuum assemblies, thereby improving the grasp. One embodiment of a method may include activating each of a plurality of vacuum assemblies of a robotic gripper by supplying a vacuum to each vacuum assembly; determining, for each of the activated vacuum assemblies, a first respective seal quality of the vacuum assembly with a first grasped object; deactivating one or more of the activated vacuum assemblies based, at least in part, on the first respective seal qualities; and reactivating each of the deactivated vacuum assemblies within a reactivation interval.
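A minimal sketch of the activate/measure/deactivate/reactivate cycle follows; the seal-quality reading is mocked with a random number and the threshold is an assumption, so this only shows the control flow, not a real vacuum interface.

```python
import random

def measure_seal_quality(assembly_id):
    """Stand-in for a real per-assembly vacuum-pressure reading (0 = no seal, 1 = perfect)."""
    return random.random()

def regrasp_cycle(assembly_ids, quality_threshold=0.6):
    """Activate every vacuum assembly, deactivate poorly sealed ones, then
    reattempt them within a reactivation interval (threshold is an assumption)."""
    active = set(assembly_ids)                          # step 1: supply vacuum to all assemblies
    qualities = {a: measure_seal_quality(a) for a in active}
    poor = {a for a, q in qualities.items() if q < quality_threshold}
    active -= poor                                      # step 2: deactivate weak seals
    for a in poor:                                      # step 3: reactivate within the interval
        if measure_seal_quality(a) >= quality_threshold:
            active.add(a)
    return active

if __name__ == "__main__":
    print("sealed assemblies:", sorted(regrasp_cycle(range(8))))
```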
Disclosed herein are systems and methods directed to an industrial robot that can perform mobile manipulation (e.g., dexterous mobile manipulation). A robotic arm may be capable of precise control when reaching into tight spaces, may be robust to impacts and collisions, and/or may limit the mass of the robotic arm to reduce the load on the battery and increase runtime. A robotic arm may include differently configured proximal joints and/or distal joints. Proximal joints may be designed to promote modularity and may include separate functional units, such as modular actuators, encoders, bearings, and/or clutches. Distal joints may be designed to promote integration and may include offset actuators to enable a through-bore for the internal routing of vacuum, power, and signal connections.
B25J 9/10 - Programme-controlled manipulators characterised by positioning means for manipulator elements
B25J 9/08 - Programme-controlled manipulators characterised by modular constructions
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
B25J 9/06 - Programme-controlled manipulators characterised by multi-articulated arms
B25J 9/04 - Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian co-ordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical co-ordinate type or polar co-ordinate type
Consistent connection strategies for coupling accessories to a robot can help achieve certain objectives, e.g., to tolerate and correct misalignment during coupling of the accessory. In some embodiments, the connection strategy may enable certain accessories to connect to certain sides of a robot. When connected, an accessory may be rigid in yaw, lateral motion, and fore/aft motion, while remaining unconstrained in roll and pitch as well as vertical motion. A sensor may enable detection of the accessory, and a mechanical fuse may release the accessory when a force threshold is exceeded. A mechanical coupler of an accessory may include two connectors, each of which includes a receiving area configured to receive a pin on the robot and a latch configured to retain the pin within the receiving area. The pins (and the receiving areas) may be differently sized, and may be differently arranged.
Methods and apparatus for object detection and pick order determination for a robotic device are provided. Information about a plurality of two-dimensional (2D) object faces of the objects in the environment may be processed to determine whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory, wherein each of the prototype objects in the set represents a three-dimensional (3D) object. A model of 3D objects in the environment of the robotic device is generated using one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces.
Method and apparatus for object detection by a robot are provided. The method comprises analyzing, using a set of trained detection models, one or more first images of an environment of the robot to detect one or more objects in the environment of the robot, generating at least one fine-tuned model by training one or more of the trained detection models in the set, wherein the training is based on a second image of the environment of the robot and annotations associated with the second image, wherein the annotations identify one or more objects in the second image, updating the set of trained detection models to include the generated at least one fine-tuned model, and analyzing, using the updated set of trained detection models, one or more third images of the environment of the robot to detect one or more objects in the environment.
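The sketch below captures the detect → annotate → fine-tune → re-detect loop in structural form only; DetectorSet, fine_tune, and the toy "training" are invented placeholders rather than any specific learning framework.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A detection "model" is reduced to a callable that maps an image name to labels.
Detector = Callable[[str], List[str]]

@dataclass
class DetectorSet:
    """An ensemble of detectors; fine-tuned models are appended over time."""
    detectors: List[Detector] = field(default_factory=list)

    def detect(self, image: str) -> List[str]:
        found = []
        for d in self.detectors:
            found.extend(d(image))
        return sorted(set(found))

def fine_tune(base: Detector, image: str, annotations: List[str]) -> Detector:
    """Placeholder fine-tuning: the returned detector also reports the annotated
    objects for the annotated image. Real training is out of scope here."""
    def tuned(new_image: str) -> List[str]:
        extra = annotations if new_image == image else []
        return base(new_image) + extra
    return tuned

if __name__ == "__main__":
    base = lambda img: ["pallet"] if "warehouse" in img else []
    models = DetectorSet([base])
    print(models.detect("warehouse_frame_1"))             # initial detection pass
    models.detectors.append(fine_tune(base, "warehouse_frame_1", ["forklift"]))
    print(models.detect("warehouse_frame_1"))             # after adding the fine-tuned model
```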
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
12.
SYSTEMS AND METHODS FOR CONTROLLING MOVEMENTS OF ROBOTIC ACTUATORS
An electronic circuit comprises a charge storing component, a set of one or more switching components coupled to the charge storing component, and an additional switching component coupled to each of the one or more switching components in the set. The additional switching component is configured to operate in a first state or a second state based on a received current or voltage. The first state prevents current from flowing from the charge storing component to each of the one or more switching components in the set, and the second state allows current to flow from the charge storing component to each of the one or more switching components in the set.
H02P 3/18 - Arrangements for stopping or slowing electric motors, generators, or dynamo-electric converters for stopping or slowing an individual dynamo-electric motor or dynamo-electric converter for stopping or slowing an ac motor
H02P 29/024 - Detecting a fault condition, e.g. short circuit, locked rotor, open circuit or loss of load
H02P 29/028 - Detecting a fault condition, e.g. short circuit, locked rotor, open circuit or loss of load the motor continuing operation despite the fault condition, e.g. eliminating, compensating for or remedying the fault
13.
NONLINEAR TRAJECTORY OPTIMIZATION FOR ROBOTIC DEVICES
Systems and methods for determining movement of a robot are provided. A computing system of the robot receives information including an initial state of the robot and a goal state of the robot. The computing system determines, using nonlinear optimization, a candidate trajectory for the robot to move from the initial state to the goal state. The computing system determines whether the candidate trajectory is feasible. If the candidate trajectory is feasible, the computing system provides the candidate trajectory to a motion control module of the robot. If the candidate trajectory is not feasible, the computing system determines, using nonlinear optimization, a different candidate trajectory for the robot to move from the initial state to the goal state, the nonlinear optimization using one or more changed parameters.
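A compact sketch of the propose/check/retry loop described here, assuming a toy waypoint-smoothing cost solved with scipy.optimize.minimize and a step-length feasibility check; both are stand-ins for the robot's actual nonlinear program and feasibility tests.

```python
import numpy as np
from scipy.optimize import minimize

def plan_trajectory(initial, goal, num_points, smoothness_weight):
    """Solve a toy nonlinear program: waypoints from `initial` to `goal` that
    minimize path length plus a smoothness penalty (stand-in for the real optimizer)."""
    def cost(flat):
        pts = np.vstack([initial, flat.reshape(-1, 2), goal])
        steps = np.diff(pts, axis=0)
        accel = np.diff(steps, axis=0)
        return np.sum(np.linalg.norm(steps, axis=1)) + smoothness_weight * np.sum(accel ** 2)

    guess = np.linspace(initial, goal, num_points + 2)[1:-1].ravel()
    result = minimize(cost, guess, method="L-BFGS-B")
    return np.vstack([initial, result.x.reshape(-1, 2), goal])

def is_feasible(trajectory, max_step=0.5):
    """Toy feasibility check: no single step may exceed a reachable step length."""
    return bool(np.all(np.linalg.norm(np.diff(trajectory, axis=0), axis=1) <= max_step))

def plan_with_retries(initial, goal):
    params = {"num_points": 4, "smoothness_weight": 1.0}
    for _ in range(5):
        traj = plan_trajectory(initial, goal, **params)
        if is_feasible(traj):
            return traj                        # hand off to the motion control module
        params["num_points"] += 2              # re-solve with changed parameters
    raise RuntimeError("no feasible trajectory found")

if __name__ == "__main__":
    path = plan_with_retries(np.array([0.0, 0.0]), np.array([3.0, 1.0]))
    print(np.round(path, 2))
```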
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to receive sensor data associated with a door. The data processing hardware determines, using the sensor data, door properties of the door. The door properties can include a door width, a grasp search ray, a grasp type, a swing direction, or a door handedness. The data processing hardware generates a door movement operation based on the door properties. The data processing hardware can execute the door movement operation to move the door. The door movement operation can include pushing the door, pulling the door, hooking a frame of the door, or blocking the door. The data processing hardware can utilize the door movement operation to enable a robot to traverse a door without human intervention.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
The operations of a computer-implemented method (1000) include obtaining a topological map of an environment including a series of waypoints and a series of edges (1002). Each edge topologically connects a corresponding pair of adjacent waypoints. The edges represent traversable routes for a robot. The operations include determining (1004), using the topological map and sensor data captured by the robot, one or more candidate alternate edges. Each candidate alternate edge potentially connects a corresponding pair of waypoints that are not connected by one of the edges. For each respective candidate alternate edge, the operations include determining (1006), using the sensor data, whether the robot can traverse the respective candidate alternate edge without colliding with an obstacle and, when the robot can traverse the respective candidate alternate edge, confirming (1008) the respective candidate alternate edge as a respective alternate edge. The operations include updating (1010), using the confirmed alternate edges, the topological map.
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to perform operations including obtaining a topological map including waypoints and edges. Each edge connects adjacent waypoints. The waypoints and edges represent a navigation route for the robot to follow. Operations include determining that an edge that connects first and second waypoints is blocked by an obstacle. Operations include generating, using image data and the topological map, one or more alternate waypoints offset from one of the waypoints. For each alternate waypoint, operations include generating an alternate edge connecting the alternate waypoint to a waypoint. Operations include adjusting the navigation route to include at least one alternate waypoint and alternate edge that bypass the obstacle. Operations include navigating the robot from the first waypoint to an alternate waypoint along the alternate edge connecting the alternate waypoint to the first waypoint.
A computer-implemented method when executed by data processing hardware causes the data processing hardware to perform operations. The operations include detecting a candidate support surface at an elevation less than a current surface supporting a legged robot. A determination is made as to whether the candidate support surface includes an area of missing terrain data within a portion of an environment surrounding the legged robot, where the area is large enough to receive a touchdown placement for a leg of the legged robot. When the candidate support surface includes such an area of missing terrain data, at least a portion of the area is classified as a no-step region of the candidate support surface. The no-step region indicates a region where the legged robot should avoid touching down a leg of the legged robot.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G05D 1/02 - Control of position or course in two dimensions
18.
AUTONOMOUS AND TELEOPERATED SENSOR POINTING ON A MOBILE ROBOT
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to perform operations. The operations include receiving a sensor pointing command that commands the robot to use a sensor to capture sensor data of a location in an environment of the robot. The sensor is disposed on the robot. The operations include determining, based on an orientation of the sensor relative to the location, a direction for pointing the sensor toward the location, and an alignment pose of the robot to cause the sensor to point in the direction toward the location. The operations include commanding the robot to move from a current pose to the alignment pose. After the robot moves to the alignment pose and the sensor is pointing in the direction toward the location, the operations include commanding the sensor to capture the sensor data of the location in the environment.
A computer-implemented method (300) when executed by data processing hardware causes the data processing hardware to perform operations. The operations include receiving a navigation route for a mobile robot (302). The navigation route includes a sequence of waypoints connected by edges (302). Each edge corresponds to movement instructions that navigate the mobile robot between waypoints of the sequence of waypoints (302). While the mobile robot is traveling along the navigation route, the operations include determining (304) that the mobile robot is unable to execute a respective movement instruction for a respective edge of the navigation route due to an obstacle obstructing the respective edge, generating (306) an alternative path to navigate the mobile robot to an untraveled waypoint in the sequence of waypoints, and resuming (308) travel by the mobile robot along the navigation route. The alternative path avoids the obstacle.
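The sketch below illustrates one way to realize the "alternative path to an untraveled waypoint" step with a plain breadth-first search that skips the obstructed edge; the graph, waypoint names, and the choice of BFS are assumptions for illustration, not the navigation system's actual planner.

```python
from collections import deque

def shortest_path(edges, start, goal, blocked=frozenset()):
    """Breadth-first search over a waypoint graph, ignoring any edge in `blocked`."""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in adjacency.get(node, ()):
            if neighbor not in parents and frozenset((node, neighbor)) not in blocked:
                parents[neighbor] = node
                frontier.append(neighbor)
    return None

if __name__ == "__main__":
    # Route A-B-C-D; the edge B-C turns out to be obstructed while the robot sits at B.
    edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "C")]
    detour = shortest_path(edges, "B", "C", blocked={frozenset(("B", "C"))})
    print("alternative path to untraveled waypoint C:", detour)   # ['B', 'E', 'C']
```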
A method of estimating one or more mass characteristics of a payload manipulated by a robot includes moving the payload using the robot, determining one or more accelerations of the payload while the payload is in motion, sensing, using one or more sensors of the robot, a wrench applied to the payload while the payload is in motion, and estimating the one or more mass characteristics of the payload based, at least in part, on the determined accelerations and the sensed wrench.
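As a simplified worked example of the estimation step, the code below fits a single mass to sensed forces and measured accelerations via least squares on f = m(a − g); it ignores the torque component of the wrench and any center-of-mass or inertia terms, so it only conveys the idea, not the full estimator.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def estimate_mass(forces, accelerations):
    """Least-squares fit of m in f = m * (a - g) over a batch of samples,
    stacking all force/acceleration components into one overdetermined system."""
    a_rel = (np.asarray(accelerations) - GRAVITY).ravel()
    f = np.asarray(forces).ravel()
    return float(np.dot(a_rel, f) / np.dot(a_rel, a_rel))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_mass = 2.5
    accels = rng.normal(0.0, 1.0, size=(50, 3))                    # measured payload accelerations
    forces = true_mass * (accels - GRAVITY) + rng.normal(0, 0.1, size=(50, 3))
    print(f"estimated mass: {estimate_mass(forces, accels):.3f} kg")
```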
An imaging apparatus includes a structural support rigidly coupled to a surface of a mobile robot and a plurality of perception modules, each of which is arranged on the structural support, has a different field of view, and includes a two-dimensional (2D) camera configured to capture a color image of an environment, a depth sensor configured to capture depth information of one or more objects in the environment, and at least one light source configured to provide illumination to the environment. The imaging apparatus further includes control circuitry configured to control a timing of operation of the 2D camera, the depth sensor, and the at least one light source included in each of the plurality of perception modules, and at least one computer processor configured to process the color image and the depth information to identify at least one characteristic of one or more objects in the environment.
A robot comprises a mobile base, a robotic arm operatively coupled to the mobile base, and at least one interface configured to enable selective coupling to at least one accessory. The at least one interface comprises an electrical interface configured to transmit power and/or data between the robot and the at least one accessory, and a mechanical interface configured to enable physical coupling between the robot and the at least one accessory.
A robot includes a mobile base, a turntable rotatably coupled to the mobile base, a robotic arm operatively coupled to the turntable, and at least one directional sensor. An orientation of the at least one directional sensor is independently controllable. A method of controlling a robotic arm includes controlling a state of a mobile base and controlling a state of a robotic arm coupled to the mobile base, based, at least in part, on the state of the mobile base.
B25J 15/06 - Gripping heads with vacuum or magnetic holding means
B25J 9/04 - Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian co-ordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical co-ordinate type or polar co-ordinate type
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
24.
SAFETY SYSTEMS AND METHODS FOR AN INTEGRATED MOBILE MANIPULATOR ROBOT
A robot comprises a mobile base, a robotic arm operatively coupled to the mobile base, a plurality of distance sensors, at least one antenna configured to receive one or more signals from a monitoring system external to the robot, and a computer processor. The computer processor is configured to limit one or more operations of the robot when it is determined that the one or more signals are not received by the at least one antenna.
A perception mast for a mobile robot is provided. The mobile robot comprises a mobile base, a turntable operatively coupled to the mobile base, the turntable configured to rotate about a first axis, an arm operatively coupled to a first location on the turntable, and the perception mast operatively coupled to a second location on the turntable, the perception mast configured to rotate about a second axis parallel to the first axis, wherein the perception mast includes, disposed thereon, a first perception module and a second perception module arranged between the first perception module and the turntable.
A method (300) includes receiving, while a robot (100) traverses a building environment (10), sensor data (134) captured by sensors (132, 132a-n) of the robot. The method includes receiving a building information model (BIM) (30) for the environment that includes semantic information (32) identifying permanent objects (PO) within the environment. The method includes generating localization candidates (212, 212a-n) for a localization map (202) of the environment. Each localization candidate (212) corresponds to a feature of the environment identified by the sensor data and represents a potential localization reference point (222). The localization map is configured to localize the robot within the environment. For each localization candidate, the method includes determining whether the respective feature corresponding to the respective localization candidate is a permanent object (PO) in the environment and generating the respective localization candidate as a localization reference point (222) in the localization map for the robot.
A method (400) includes receiving sensor data (134) for an environment (10) about the robot (100). The sensor data is captured by one or more sensors (132, 132a-n) of the robot. The method includes detecting one or more objects (212, 214) in the environment using the received sensor data. For each detected object, the method includes authoring an interaction behavior (222) indicating a behavior (222) that the robot is capable of performing with respect to the corresponding detected object. The method also includes augmenting a localization map (182) of the environment to reflect the respective interaction behavior of each detected object.
… a measured task parameter set (322) defining a position of the end effector. The operations include determining, using the measured task parameter set, at least one axis of freedom and at least one constrained axis for the end effector. The operations include assigning a first impedance value (238) to the end effector along the at least one axis of freedom and a second impedance value (238) to the end effector along the at least one constrained axis. The operations include instructing the articulated arm to move the end effector along the at least one axis of freedom.
G05B 19/42 - Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
A method (500) includes obtaining, from an operator (12) of a robot (100), a return execution lease (210R) associated with one or more commands (174) that is scheduled within a sequence of execution leases (210). The robot is configured to execute commands associated with a current execution lease (210) that is an earliest execution lease (210) in the sequence of execution leases that is not expired. The method includes obtaining an execution lease expiration trigger (134T) triggering expiration of the current execution lease. After obtaining the trigger, the method includes determining that the return execution lease is a next current execution lease (210) in the sequence. While the return execution lease is the current execution lease, the method includes executing the one or more commands associated with the return execution lease which causes the robot to navigate to a return location (410).
A method (500) for a robot (100) includes receiving a three-dimensional point cloud of sensor data (134) for a space within an environment (10) about the robot. The method includes receiving a selection input indicating a user-selection of a target object represented in an image (300) corresponding to the space. The target object is for grasping by an end-effector (150) of a robotic manipulator (126). The method includes generating a grasp region (216) for the end-effector of the robotic manipulator by projecting a plurality of rays (218) from the selected target object of the image onto the three-dimensional point cloud of sensor data. The method includes determining a grasp geometry (212) for the robotic manipulator to grasp the target object within the grasp region. The method includes instructing the end-effector of the robotic manipulator to grasp the target object within the grasp region based on the grasp geometry.
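A small sketch of the ray-projection idea follows: rays are cast through a neighborhood of the selected pixel using a pinhole model, and cloud points near those rays form the grasp region. The intrinsics, ray count, and distance threshold below are assumptions, not values from the publication.

```python
import numpy as np

def pixel_rays(pixel, fx=500.0, fy=500.0, cx=320.0, cy=240.0, jitter=2.0, count=9):
    """Cast `count` unit rays through a small neighborhood around the selected pixel
    using a simple pinhole camera model (intrinsics are illustrative)."""
    rng = np.random.default_rng(0)
    offsets = rng.uniform(-jitter, jitter, size=(count, 2))
    u, v = pixel
    dirs = np.column_stack([(u + offsets[:, 0] - cx) / fx,
                            (v + offsets[:, 1] - cy) / fy,
                            np.ones(count)])
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def grasp_region(cloud, rays, max_distance=0.02):
    """Keep cloud points lying within `max_distance` of any of the cast rays."""
    selected = np.zeros(len(cloud), dtype=bool)
    for d in rays:
        along = cloud @ d                        # projection length onto the ray
        closest = np.outer(along, d)             # closest point on the ray
        selected |= np.linalg.norm(cloud - closest, axis=1) < max_distance
    return cloud[selected]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    cloud = rng.uniform([-0.5, -0.5, 0.8], [0.5, 0.5, 1.2], size=(2000, 3))
    rays = pixel_rays(pixel=(330.0, 250.0))
    region = grasp_region(cloud, rays)
    print(f"{len(region)} points fall inside the grasp region")
```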
A computer-implemented method (300), when executed by data processing hardware (102, 202) of a robot (10) having an articulated arm (30) and a base (12), causes data processing hardware to perform operations. The operations include determining a first location (Lu) of a workspace (4) of the articulated arm associated with a current base configuration of the base of the robot. The operations also include receiving a task request (62) defining a task (6a, 6b) for the robot to perform outside of the workspace of the articulated arm at the first location. The operations also include generating base parameters (152) associated with the task request. The operations further include instructing, using the generated base parameters, the base of the robot to move from the current base configuration to an anticipatory base configuration.
A method (500), executed by data processing hardware (142) of a robot (100), includes receiving sensor data (134) for a space within an environment (10) about the robot. The method includes receiving, from a user interface (UI) (300), a user input indicating a user-selection of a location within a representation (312) of the space. The location corresponds to a position of a target object (302) within the space. The method includes receiving, from the UI, a plurality of grasping inputs (304) designating an orientation and translation for an end-effector (128H, 150) of a robotic manipulator (126) to grasp the target object. The method includes generating a three-dimensional (3D) location of the target object based on the received sensor data and the location corresponding to the user input. The method includes instructing the end-effector to grasp the target object using the generated 3D location and the plurality of grasping inputs.
… an estimated pose for the docking station based on an initial pose (P) of the legged robot relative to the docking station. The operations include identifying one or more docking station features (22) from the received sensor data. The operations include matching the one or more identified docking station features to one or more known docking station features. The operations include adjusting the estimated pose for the docking station to a corrected pose for the docking station based on an orientation of the one or more identified docking station features that match the one or more known docking station features.
B60L 53/00 - Methods of charging batteries, specially adapted for electric vehicles; Charging stations or on-board charging equipment therefor; Exchange of energy storage elements in electric vehicles
34.
DOOR OPENING BEHAVIOR AND ROBOT PRESENTING THE BEHAVIOUR
Data processing hardware (142) of a robot (100) performs operations to identify a door (20) within an environment (10). A robotic manipulator (126) of the robot grasps a feature (26) of the door on a first side facing the robot. When the door opens in a first direction toward the robot, the robotic manipulator exerts a pull force to swing the door in the first direction, a leg (120) of the robot moves to a position that blocks the door from swinging in a second direction, the robotic manipulator contacts the door on a second side opposite the first side, and the robotic manipulator exerts a door opening force on the second side as the robot traverses the doorway. When the door opens in the second direction away from the robot, the robotic manipulator exerts the door opening force on the first side as the robot traverses the doorway.
A method (400) includes generating a joint-torque-limit model (232) for an articulated arm (20) of a robot (10) based on allowable joint torque sets (234) corresponding to a current configuration (P12) of a base (12) of the robot. The method also includes receiving a first requested joint torque set (324) for a first arm pose (P20) of the articulated arm and determining, using the joint-torque-limit model, an optimized joint torque set (244) corresponding to the first requested joint torque set. The method also includes receiving a second requested joint torque set (324) for a second arm pose (P20) of the articulated arm and generating an adjusted joint torque set (332) by adjusting the second requested joint torque set based on the optimized joint torque set. The method also includes sending the adjusted joint torque set to the articulated arm.
A gripper mechanism (200) includes a pair of gripper jaws, a linear actuator (300), and a rocker bogey (318). The linear actuator (300) drives a first gripper jaw (210) to move relative to a second gripper jaw (220). Here, the linear actuator includes a screw shaft (312) and a drive nut (314) where the drive nut includes a protrusion (316p) having a protrusion axis (Ap) extending along a length of the protrusion. The protrusion axis is perpendicular to an actuation axis (AL) of the linear actuator (300) along a length of the screw shaft. The rocker bogey is coupled to the drive nut at the protrusion to form a pivot point for the rocker bogey and to enable the rocker bogey to pivot about the protrusion axis when the linear actuator drives the first gripper jaw to move relative to the second gripper jaw.
… (TL) of the output link relative to the input link. The wire routing traverses the inline twist joint to couple the input link and the output link. The wire routing includes an input link section (210), an output link section (230), and an omega section (220). A first position of the wire routing coaxially aligns at a start of the omega section on the input link with a second position of the wire routing at an end of the omega section on an output link.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
A method of identifying stairs (20) includes receiving a plurality of footfall locations (128) of a robot (100). Each respective footfall location indicates a location where a leg (120) of the robot contacted a support surface (12). The method also includes determining a plurality of candidate footfall location pairs (212) where the candidate footfall location pair includes a first and a second candidate footfall location. The method further includes clustering the first candidate footfall location into a first cluster group (222) based on a height of the first candidate footfall location and clustering the second candidate footfall location into a second cluster group based on a height of the second candidate footfall location. The method additionally includes generating a stair model (202) by representing each of the cluster groups as a corresponding stair and delineating each stair based on a respective midpoint (MP) between each adjacent cluster group.
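The sketch below shows one simple reading of the clustering-and-midpoint construction: footfall heights are grouped by a tolerance, each group becomes a stair height, and adjacent stairs are delineated at the midpoint between group heights. The tolerance and the greedy grouping are assumptions, not the patented clustering.

```python
import numpy as np

def cluster_heights(footfall_heights, tolerance=0.05):
    """Group footfall heights into clusters: a new cluster starts whenever the next
    (sorted) height is more than `tolerance` above the current cluster mean."""
    heights = np.sort(np.asarray(footfall_heights))
    clusters = [[heights[0]]]
    for h in heights[1:]:
        if h - np.mean(clusters[-1]) <= tolerance:
            clusters[-1].append(h)
        else:
            clusters.append([h])
    return [float(np.mean(c)) for c in clusters]

def stair_model(stair_heights):
    """Each cluster becomes a stair; adjacent stairs are delineated at the midpoint
    between their cluster heights."""
    boundaries = [(a + b) / 2.0 for a, b in zip(stair_heights, stair_heights[1:])]
    return {"stair_heights": stair_heights, "riser_midpoints": boundaries}

if __name__ == "__main__":
    footfalls = [0.01, 0.02, 0.18, 0.20, 0.19, 0.37, 0.39, 0.55, 0.56]
    print(stair_model(cluster_heights(footfalls)))
```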
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method (310) of localizing includes receiving odometry information (192) plotting locations (202) and sensor data (134) of an environment (10). The method includes obtaining a series of odometry information members (315), each including a respective odometry measurement at a respective time (dt). The method also includes obtaining a series of sensor data members (313), each including a respective sensor measurement at the respective time. The method also includes, for each sensor data member of the series of sensor data members, (i) determining a localization (321) at the respective time based on the respective sensor data, and (ii) determining an offset (323) of the localization relative to the odometry measurement at the respective time. The method also includes determining whether a variance (σ²offset) of the offsets determined for the localizations exceeds a threshold variance (σ²threshold). When the variance among the offsets exceeds the threshold variance, a signal (204) is generated.
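A minimal sketch of the offset-variance check, assuming 2D positions and an arbitrary variance threshold; real localization and odometry streams would of course carry full poses.

```python
import numpy as np

def offset_variance_signal(odometry_positions, localized_positions, variance_threshold=0.01):
    """Compute per-timestep offsets between localization and odometry and flag a
    problem when the variance of those offsets exceeds a threshold (value assumed)."""
    offsets = np.asarray(localized_positions) - np.asarray(odometry_positions)
    distances = np.linalg.norm(offsets, axis=1)
    variance = float(np.var(distances))
    return variance, variance > variance_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 10.0, 50)
    odom = np.column_stack([t, np.zeros_like(t)])                      # straight-line odometry
    healthy = odom + np.array([0.05, 0.02]) + rng.normal(0, 0.005, odom.shape)
    drifting = odom + np.column_stack([0.05 * t, np.zeros_like(t)])    # growing disagreement
    print("healthy: ", offset_variance_signal(odom, healthy))
    print("drifting:", offset_variance_signal(odom, drifting))
```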
G05D 1/02 - Control of position or course in two dimensions
G01C 22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or using pedometers
A method of stair tracking for modeled and perceived terrain includes receiving sensor data (134) about an environment (10) of a robot (100). The method also includes generating a set of maps (182) based on voxels corresponding to the received sensor data. The set of maps includes a ground height map and a map of movement limitations for the robot. The map of movement limitations identifies illegal regions within the environment that the robot should avoid entering. The method further includes generating a stair model (202) for a set of stairs (20) within the environment based on the sensor data, merging the stair model and the map of movement limitations to generate an enhanced stair map, and controlling the robot based on the enhanced stair map or the ground height map to traverse the environment.
B62D 57/00 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track
A method for constraining robot autonomy language includes receiving a navigation command (44) to navigate a robot (10) to a mission destination (46) within an environment (8) of the robot and generating a route specification (200) for navigating the robot from a current location (43) in the environment to the mission destination in the environment. The route specification includes a series of route segments (210). Each route segment in the series of route segments includes a goal region (220) for the corresponding route segment and a constraint region (230) encompassing the goal region. The constraint region establishes boundaries for the robot to remain within while traversing toward the goal region. The route segment also includes an initial path (310) for the robot to follow while traversing the corresponding route segment.
A method (800) includes receiving sensor data (17) of an environment (7, 800) and generating a plurality of waypoints (510) and a plurality of edges (520) each connecting a pair of the waypoints. The method includes receiving a target destination (46) to navigate to and determining a route specification (200) based on the waypoints and corresponding edges to follow for navigating to the target destination selected from waypoints and edges previously generated. For each waypoint, the method includes generating a goal region (220) encompassing the corresponding waypoint and generating at least one constraint region (230) encompassing a goal region. The at least one constraint region establishes boundaries to remain within while traversing toward the target destination. The method includes navigating to the target destination by traversing through each goal region while maintaining within the at least one constraint region.
… measuring a velocity of the first foot relative to the ground surface, and adjusting the second coefficient of friction of the first foot based on the measured velocity of the foot. One of the plurality of feet of the robot applies a force on the ground surface based on the adjusted second coefficient of friction.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method (400) for perception and fitting for a stair tracker (200) receives sensor data (134) for a robot (100) adjacent to a staircase (20). For each stair of the staircase, the method detects, at a first time step (ti), an edge (26) of a respective stair based on the sensor data (134). The method also determines whether the detected edge (212) is a most likely step edge candidate (222) by comparing the detected edge from the first time step to an alternative detected edge (224) at a second time step, the second time step occurring after the first time step. When the detected edge is the most likely step edge candidate, the method further defines a height of the respective stair based on sensor data height about the detected edge. The method also generates a staircase model (202) including stairs with respective edges at the respective defined heights.
A method (800) for online authoring of robot autonomy applications includes receiving sensor data (17) of an environment (8) about a robot (10) while the robot traverses the environment. The method also includes generating an environmental map (114) representative of the environment about the robot based on the received sensor data. While generating the environmental map, the method includes localizing a current position of the robot within the environmental map and, at each target location (52) of one or more target locations within the environment, recording a respective action (420) for the robot to perform. The method also includes generating a behavior tree (122) for navigating the robot to each target location and controlling the robot to perform the respective action at each target location within the environment during a future mission when the current position of the robot within the environmental map reaches the target location.
… a first proximal attachment location (LP1) where the first proximal attachment location is offset from the actuation axis. The drive system also includes an output link (202) rotatably coupled to the distal end of the linkage system where the output link is offset from the actuation axis.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
B25J 9/12 - Programme-controlled manipulators characterised by positioning means for manipulator elements electric
B25J 9/10 - Programme-controlled manipulators characterised by positioning means for manipulator elements
A method for generating a joint command (302) includes receiving a maneuver script (202) including a plurality of maneuvers (210) for a legged robot (100) to perform where each maneuver is associated with a cost (336). The method further includes identifying that two or more maneuvers of the plurality of maneuvers of the maneuver script occur at the same time instance. The method also includes determining a combined maneuver for the legged robot to perform at the time instance based on the two or more maneuvers and the costs associated with the two or more maneuvers. The method additionally includes generating a joint command to control motion of the legged robot at the time instance where the joint command commands a set of joints (J) of the legged robot. Here, the set of joints correspond to the combined maneuver.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
B25J 11/00 - Manipulators not otherwise provided for
A dynamic planning controller (200) receives a maneuver (210) and a current state (202) and transforms the maneuver and the current state into a nonlinear optimization problem (222). The nonlinear optimization problem is configured to optimize an unknown force and an unknown position vector. At a first time instance (t0), the controller linearizes the nonlinear optimization problem into a first linear optimization problem and determines a first solution (232) to the first linear optimization problem using quadratic programming. At a second time instance (t1), the controller linearizes the nonlinear optimization problem into a second linear optimization problem based on the first solution and determines a second solution to the second linear optimization problem based on the first solution using the quadratic programming. The controller also generates a joint command (204) to control motion of the robot during the maneuver based on the second solution.
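The code below conveys the iterative structure only: it builds a quadratic model of a toy nonlinear cost around the previous solution and minimizes that model in closed form at each step, with each subproblem built from the solution of the one before. This stands in for the controller's linearize-and-solve-with-quadratic-programming scheme but is not its actual formulation; the cost function and step count are invented.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-5):
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def numerical_hessian(f, x, eps=1e-4):
    n = len(x)
    hess = np.zeros((n, n))
    for i in range(n):
        step = np.zeros_like(x)
        step[i] = eps
        hess[:, i] = (numerical_gradient(f, x + step) - numerical_gradient(f, x - step)) / (2 * eps)
    return 0.5 * (hess + hess.T)

def solve_by_successive_quadratic_subproblems(cost, x0, iterations=25):
    """Repeatedly approximate the nonlinear cost by a quadratic model around the
    previous solution and minimize that model in closed form (a Newton-style loop)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        g = numerical_gradient(cost, x)
        h = numerical_hessian(cost, x)
        x = x - np.linalg.solve(h + 1e-6 * np.eye(len(x)), g)
    return x

if __name__ == "__main__":
    # Toy nonlinear cost standing in for the maneuver's force/position objective.
    cost = lambda z: (z[0] - 1.0) ** 4 + (z[1] + 2.0) ** 2 + 0.1 * z[0] * z[1]
    print(np.round(solve_by_successive_quadratic_subproblems(cost, [0.0, 0.0]), 3))
```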
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method (1700) of planning a swing trajectory (132) for a leg (12) of a robot (10) includes receiving an initial position (50) of a leg of the robot, an initial velocity (52) of the leg, a touchdown location (62), and a touchdown target time (64). The method includes determining a difference between the initial position and the touchdown location and separating the difference into a horizontal motion component and a vertical motion component. The method also includes selecting a horizontal motion policy (210) and a vertical motion policy (610) to satisfy the motion components. Each policy produces a respective trajectory as a function of the initial position, the initial velocity, the touchdown location, and the touchdown target time. The method also includes executing the selected policies to swing the leg of the robot from the initial position to the touchdown location at the touchdown target time.
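The sketch below shows one simple pair of policies of the kind described: a cubic horizontal profile that matches the initial position and velocity and reaches the touchdown location with zero velocity at the target time, and a vertical profile that adds a parabolic clearance bump. The clearance value and polynomial forms are assumptions, not the patented policies.

```python
import numpy as np

def horizontal_policy(p0, v0, p_goal, t_goal):
    """Cubic polynomial starting at position `p0` with velocity `v0` and reaching
    `p_goal` with zero velocity at time `t_goal` (one simple choice of policy)."""
    a0, a1 = p0, v0
    a2 = (3 * (p_goal - p0) - 2 * v0 * t_goal) / t_goal ** 2
    a3 = (-2 * (p_goal - p0) + v0 * t_goal) / t_goal ** 3
    return lambda t: a0 + a1 * t + a2 * t ** 2 + a3 * t ** 3

def vertical_policy(z0, z_goal, t_goal, clearance=0.08):
    """Straight-line interpolation from lift-off to touchdown height plus a parabolic
    bump that peaks `clearance` above that line at mid-swing (another simple choice)."""
    def z(t):
        s = t / t_goal                               # normalized swing phase in [0, 1]
        return (1 - s) * z0 + s * z_goal + 4.0 * clearance * s * (1 - s)
    return z

if __name__ == "__main__":
    x_of_t = horizontal_policy(p0=0.0, v0=0.2, p_goal=0.30, t_goal=0.4)
    z_of_t = vertical_policy(z0=0.0, z_goal=0.05, t_goal=0.4)
    for t in np.linspace(0.0, 0.4, 5):
        print(f"t={t:.2f}s  x={x_of_t(t):.3f}  z={z_of_t(t):.3f}")
```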
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
A method of constrained mobility mapping includes receiving from at least one sensor (132) of a robot (100) at least one original set of data (134) and a current set of data from an environment (10) about the robot (100). The method further includes generating a voxel map (210) including a plurality of voxels (212) based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map (218) based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
A method for generating intermediate waypoints (310) for a navigation system (100) of a robot (10) includes receiving a navigation route (112). The navigation route includes a series of high-level waypoints (210) that begin at a starting location (113) and end at a destination location (114), and is based on high-level navigation data (50) representative of locations of static obstacles in an area the robot is to navigate. The method also includes receiving image data (17) of an environment (8) about the robot from an image sensor (31) and generating at least one intermediate waypoint based on the image data. The method also includes adding the at least one intermediate waypoint to the series of high-level waypoints of the navigation route and navigating the robot from the starting location along the series of high-level waypoints and the at least one intermediate waypoint toward the destination location.
G01C 21/20 - Instruments for performing navigational calculations
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
G05D 1/02 - Control of position or course in two dimensions
… joint dynamics (JDsw) of a swing leg of the robot (100) where the swing leg performs a swing phase of a gait of the robot. The method also includes receiving odometry (192) defining an estimation of a pose of the robot and determining whether an unexpected torque on the swing leg corresponds to an impact (202) on the swing leg. When the unexpected torque corresponds to the impact, the method further includes determining whether the impact is indicative of a touchdown of the swing leg on a ground surface (12) based on the odometry and the joint dynamics. When the impact is not indicative of the touchdown of the swing leg, the method includes classifying a cause of the impact based on the odometry of the robot and the joint dynamics of the swing leg.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method (1700) for controlling a robot (300) includes receiving image data (342) from at least one image sensor (344). The image data corresponds to an environment (301) about the robot. The method also includes executing a graphical user interface (221) configured to display a scene (222) of the environment based on the image data and receive an input indication indicating selection of a pixel location (224) within the scene. The method also includes determining a pointing vector based on the selection of the pixel location. The pointing vector represents a direction of travel for navigating the robot in the environment. The method also includes transmitting a waypoint command to the robot. The waypoint command when received by the robot causes the robot to navigate to a target location. The target location is based on an intersection between the pointing vector and a terrain estimate of the robot.
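A minimal sketch of turning a selected pixel into a navigation target, assuming a pinhole camera and a flat terrain estimate at a fixed height; the intrinsics and camera pose used in the example are illustrative, not values from the publication.

```python
import numpy as np

def pointing_vector(pixel, camera_rotation, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Turn a selected pixel into a world-frame pointing vector using a pinhole
    model (intrinsics and camera orientation here are illustrative)."""
    u, v = pixel
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray_world = camera_rotation @ ray_cam
    return ray_world / np.linalg.norm(ray_world)

def waypoint_from_pixel(pixel, camera_position, camera_rotation, terrain_height=0.0):
    """Intersect the pointing vector with a flat terrain estimate z = terrain_height."""
    direction = pointing_vector(pixel, camera_rotation)
    if abs(direction[2]) < 1e-6:
        return None                                   # ray parallel to the terrain
    t = (terrain_height - camera_position[2]) / direction[2]
    return camera_position + t * direction if t > 0 else None

if __name__ == "__main__":
    # Camera 0.7 m above the ground, pitched down 30 degrees, looking along world +x.
    pitch = np.radians(30.0)
    R = np.array([[0.0, -np.sin(pitch), np.cos(pitch)],
                  [-1.0, 0.0, 0.0],
                  [0.0, -np.cos(pitch), -np.sin(pitch)]])
    target = waypoint_from_pixel((320.0, 240.0), np.array([0.0, 0.0, 0.7]), R)
    print("navigate-to target:", np.round(target, 2))
```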
A method (1000) for calibrating a position measurement system (200) includes receiving measurement data (224, 230) from the position measurement system and determining that the measurement data includes periodic distortion data (232). The position measurement system includes a nonius track (212b, 212d) and a master track (212a, 212c). The method also includes modifying the measurement data by decomposing the periodic distortion data into periodic components and removing the periodic components from the measurement data.
G01D 5/244 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means generating pulses or pulse trains
G01D 5/245 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means generating pulses or pulse trains using a variable number of pulses in a train
The present disclosure provides: at least one component of a rotary valve subassembly (800); a rotary valve assembly (900) including the rotary valve subassembly; a hydraulic circuit (1000) including the rotary valve assembly; an assembly including a robot (100) that incorporates the hydraulic circuit; and a method of operating the rotary valve assembly. The at least one component of the rotary valve subassembly includes a spool (802). The at least one component of the rotary valve subassembly includes a sleeve (804).
A method (500) for negotiating stairs (20) includes receiving image data (164) about a robot (100) maneuvering in an environment (10) with stairs. Here, the robot includes two or more legs (104). Prior to the robot traversing the stairs, for each stair, the method further includes determining a corresponding step region (220) based on the received image data. The step region identifies a safe placement area on a corresponding stair for a distal end (106) of a corresponding swing leg of the robot. Also prior to the robot traversing the stairs, the method includes shifting a weight distribution of the robot towards a front portion of the robot. The method further includes, for each stair, moving the distal end of the corresponding swing leg of the robot to a target step location where the target step location is within the corresponding step region of the stair.
G05D 1/02 - Control of position or course in two dimensions
G05D 1/08 - Control of attitude, i.e. control of roll, pitch, or yaw
B62D 57/00 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track
… (122) to the target box location that satisfies a threshold second alignment distance (224, 224b), and releasing the box held by the robot. The release of the box causes the box to pivot toward a boundary edge (24) of the target box location.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
B65G 57/08 - Stacking of articles by adding to the top of the stack articles being tilted or inverted prior to depositing
B65G 61/00 - Use of pick-up or transfer devices or of manipulators for stacking or de-stacking articles not otherwise provided for
B65G 57/20 - Stacking of articles of particular shape three-dimensional, e.g. cubiform, cylindrical
… a wheel torque and a wheel axle force to perform the task. The method includes receiving movement constraints (240) for the robot and manipulation inputs (230) configured to manipulate the arm to perform the task. For each joint, the method generates a corresponding joint torque (tj) having an angular momentum where the joint torque satisfies the movement constraints based on the manipulation inputs, the wheel torque, and the wheel axle force. The method further includes controlling the robot to perform the task using the joint torques.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
… image frame pairs, each pair including a monocular image frame and a depth image frame. For each image frame pair, the method includes determining corners (214) for a rectangle associated with the at least one target box within the respective monocular image frame. Based on the determined corners, the method includes performing edge detection and determining faces (224) within the respective monocular image frame and extracting planes (226) corresponding to the at least one target box from the respective depth image frame. The method includes matching the determined faces to the extracted planes and generating a box estimation (222) based on the determined corners, the performed edge detection, and the matched faces.
A robot (100) includes a drive system configured to maneuver the robot about an environment (10) and data processing hardware (112) in communication with memory hardware (114) and the drive system. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving image data (124) of the robot maneuvering in the environment and executing at least one waypoint heuristic (212). The at least one waypoint heuristic is configured to trigger a waypoint placement on a waypoint map (200). In response to triggering the waypoint placement, the operations include recording a waypoint (210) on the waypoint map where the waypoint is associated with at least one waypoint edge (220) and includes sensor data obtained by the robot. The at least one waypoint edge includes a pose transform expressing how to move between two waypoints.
A method for terrain and constraint planning a step plan includes receiving, at data processing hardware (36) of a robot (10), image data (17) of an environment (8) about the robot from at least one image sensor (31). The robot includes a body (11) and legs (12). The method also includes generating, by the data processing hardware, a body-obstacle map (112), a ground height map (116), and a step-obstacle map (114) based on the image data and generating, by the data processing hardware, a body path (510) for movement of the body of the robot while maneuvering in the environment based on the body-obstacle map. The method also includes generating, by the data processing hardware, a step path (350) for the legs of the robot while maneuvering in the environment based on the body path, the body-obstacle map, the ground height map, and the step-obstacle map.
A method of manipulating boxes (22) includes receiving a minimum box size for a plurality of boxes varying in size located in a walled container (30). The method also includes dividing a grip area of a gripper (200) into a plurality of zones (Z). The method further includes locating a set of candidate boxes (24, 24T) based on an image from a visual sensor (120). For each zone, the method additionally includes, determining an overlap of a respective zone with one or more neighboring boxes (26) to the set of candidate boxes. The method also includes determining a grasp pose (PG) for a target candidate box (24T) that avoids one or more walls (30w) of the walled container. The method further includes executing the grasp pose to lift the target candidate box by the gripper where the gripper activates each zone of the plurality of zones that does not overlap a respective neighboring box (26) to the target candidate box.
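The sketch below illustrates the zone-activation rule on a 2D footprint: the grip area is split into a grid of zones and only zones that do not overlap a neighboring box are activated. The grid resolution and the rectangle representation are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle in the plane of the gripper face (meters)."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Rect") -> bool:
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def active_zones(grip_area: Rect, rows: int, cols: int, neighbor_boxes):
    """Split the grip area into a rows x cols grid of zones and keep only the zones
    that do not overlap any neighboring box footprint (zone count is an assumption)."""
    zone_w, zone_h = grip_area.w / cols, grip_area.h / rows
    zones = []
    for r in range(rows):
        for c in range(cols):
            zone = Rect(grip_area.x + c * zone_w, grip_area.y + r * zone_h, zone_w, zone_h)
            if not any(zone.overlaps(box) for box in neighbor_boxes):
                zones.append((r, c))
    return zones

if __name__ == "__main__":
    grip = Rect(0.0, 0.0, 0.4, 0.4)                      # gripper face over the target box
    neighbors = [Rect(0.35, 0.0, 0.3, 0.4)]              # neighboring box intrudes on the right
    print("zones to activate:", active_zones(grip, rows=2, cols=2, neighbor_boxes=neighbors))
```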
The robot is shifted into a sitting pose (RSit) by moving the counter-balance body relative to the inverted pendulum body away from the ground surface to position a center of mass of the robot substantially over the drive wheel.
B25J 5/00 - Manipulators mounted on wheels or on carriages
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
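The requirement to position the center of mass substantially over the drive wheel can be expressed as a one-line moment balance. The sketch below is illustrative only; the masses, the 1-D model, and the sign convention are assumptions.

```python
# Illustrative moment balance: shift the counter-balance so the combined
# center of mass sits over the drive wheel contact (x = 0). Values invented.
def counterbalance_offset(m_pendulum, x_pendulum, m_counter):
    """Counter-balance position that places the total CoM at x = 0:
        m_p * x_p + m_c * x_c = 0  ->  x_c = -m_p * x_p / m_c
    """
    return -m_pendulum * x_pendulum / m_counter

# Pendulum body CoM 0.10 m ahead of the wheel; 12 kg body, 8 kg counter-balance.
print(counterbalance_offset(12.0, 0.10, 8.0))   # -0.15 m (behind the wheel)
```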
A method (1400) of maneuvering a robot (100) includes driving the robot across a surface (12) and turning the robot by shifting a center of mass of the robot toward a turning direction, thereby leaning the robot into the turning direction. The robot includes an inverted pendulum body (200), a counter-balance body (300) disposed on the inverted pendulum body and configured to move relative to the inverted pendulum body, at least one leg (400) prismatically coupled to the inverted pendulum body, and a drive wheel (500) rotatably coupled to the at least one leg. The inverted pendulum body has first and second end portions (210, 220) and defines a forward drive direction. The method also includes turning the robot by at least one of moving the counter-balance body relative to the inverted pendulum body or altering a height of the at least one leg with respect to the surface.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
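Leaning into the turn, as described above, can be related to speed and turn radius by the usual coordinated-turn balance. The short calculation below is illustrative only; the model and the numbers are assumptions, not taken from the abstract.

```python
# Illustrative coordinated-turn lean angle: gravity balances the centripetal
# term when tan(theta) = v^2 / (g * r). Speed and radius are invented.
import math

def lean_angle(speed_mps, turn_radius_m, g=9.81):
    """Lean angle (rad) toward the turn that balances centripetal acceleration."""
    return math.atan2(speed_mps ** 2, g * turn_radius_m)

print(math.degrees(lean_angle(2.0, 3.0)))   # about 7.7 degrees at 2 m/s on a 3 m turn
```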
A robot (100) includes an inverted pendulum body (200) having first and second end portions (210, 220), a counter-balance body (300) disposed on the inverted pendulum body and configured to move relative to the inverted pendulum body, at least one leg (400) having first and second ends (410, 420), and a drive wheel (500) rotatably coupled to the second end of the at least one leg. The first end of the at least one leg is prismatically coupled to the second end portion of the inverted pendulum body.
B25J 5/00 - Manipulators mounted on wheels or on carriages
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
A method (1400) of operating a robot (100) includes driving the robot to approach a reach point, extending a manipulator arm (600) forward of the reach point, and maintaining a drive wheel (500) and a center of mass of the robot rearward of the reach point by moving a counter-balance body (300) relative to an inverted pendulum body (200) while extending the manipulator arm forward of the reach point. The robot includes the inverted pendulum body, the counter-balance body disposed on the inverted pendulum body, the manipulator arm connected to the inverted pendulum body, at least one leg (400) having a first end (410) prismatically coupled to the inverted pendulum body, and the drive wheel rotatably coupled to a second end (420) of the at least one leg.
B25J 5/00 - Manipulators mounted on wheels or on carriages
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
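The constraint of keeping the drive wheel and the center of mass rearward of the reach point while the arm extends forward can be written as a simple 1-D moment inequality. The sketch below solves that inequality for the farthest-forward admissible counter-balance position; the masses, positions, and 1-D model are assumptions.

```python
# Hypothetical sketch: how far forward may the counter-balance sit while the
# combined center of mass stays rearward of the reach point? Values invented.
def required_counterbalance_x(masses, positions, m_counter, x_reach):
    """Solve sum(m_i*x_i) + m_c*x_c <= (sum(m_i) + m_c) * x_reach for x_c,
    i.e. the farthest-forward counter-balance position that keeps the
    combined center of mass at or behind the reach point x_reach."""
    m_total = sum(masses) + m_counter
    return (m_total * x_reach - sum(m * x for m, x in zip(masses, positions))) / m_counter

# Inverted pendulum body, leg/wheel, and an arm extended 0.9 m forward.
masses = [12.0, 6.0, 4.0]
positions = [0.0, 0.0, 0.9]            # meters, relative to the wheel contact
print(required_counterbalance_x(masses, positions, m_counter=8.0, x_reach=0.3))
# 0.675 m: any farther forward and the center of mass crosses the reach point.
```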
A robot system (400) includes: an upper body section (408b) including one or more end-effectors (422, 424); a lower body section (408a) including one or more legs (404, 406); and an intermediate body section (408c) coupling the upper and lower body sections. An upper body control system (417b) operates at least one of the end-effectors. The intermediate body section experiences a first intermediate body linear force and/or moment based on an end-effector force acting on the at least one end-effector. A lower body control system (417a) operates the one or more legs. The legs experience respective surface reaction forces. The intermediate body section experiences a second intermediate body linear force and/or moment based on the surface reaction forces. The one or more legs operate so that the second intermediate body linear force balances the first intermediate body linear force and the second intermediate body moment balances the first intermediate body moment.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
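The balance condition in the abstract, where surface reaction forces cancel the end-effector-induced force and moment on the intermediate body, can be shown with a planar two-leg example. The sketch below is a simplified illustration; the geometry, the load, and the planar model are assumptions.

```python
# Planar illustration: choose vertical foot forces so that force and moment on
# the intermediate body from the end-effector load are cancelled. Values invented.
import numpy as np

def leg_reactions(front_x, rear_x, ee_force_z, ee_x):
    """Solve for vertical foot forces (front, rear) about the intermediate body
    at x = 0 such that F_front + F_rear + F_ee = 0 (force balance) and
    front_x*F_front + rear_x*F_rear + ee_x*F_ee = 0 (moment balance)."""
    A = np.array([[1.0, 1.0], [front_x, rear_x]])
    b = np.array([-ee_force_z, -ee_x * ee_force_z])
    return np.linalg.solve(A, b)

# A 50 N downward end-effector load 0.2 m ahead of the intermediate body,
# with feet 0.3 m ahead of and 0.3 m behind it.
print(leg_reactions(front_x=0.3, rear_x=-0.3, ee_force_z=-50.0, ee_x=0.2))
# -> roughly [41.7, 8.3]: the front leg takes the larger share of the load.
```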
68.
MOTOR AND CONTROLLER INTEGRATION FOR A LEGGED ROBOT
An example robot (300) includes: a motor (1502) disposed within a housing (1504) at a joint (403, 403, 424, 800) configured to control motion of a member (1524) of a robot; a controller (1508) including one or more printed circuit boards (PCBs) (1510, 1512, 1534) disposed within the housing and including a plurality of field-effect transistors (FETs) (1540) disposed on a surface of a PCB of the one or more PCBs facing the motor; a rotary position sensor (1520) mounted on the controller; a shaft (1514) coupled to a rotor of the motor and extending therefrom to the controller; and a magnet (1518) mounted within the shaft at an end of the shaft facing the controller.
H02K 11/33 - Drive circuits, e.g. power electronics
H02K 11/215 - Magnetic effect devices, e.g. Hall-effect or magneto-resistive elements
H02K 11/24 - Devices for sensing torque, or actuated thereby
H02K 11/25 - Devices for sensing temperature, or actuated thereby
H02K 9/22 - Arrangements for cooling or ventilating by solid heat conducting material embedded in, or arranged in contact with, the stator or rotor, e.g. heat bridges
H02K 7/00 - Arrangements for handling mechanical energy structurally associated with dynamo-electric machines, e.g. structural association with mechanical driving motors or auxiliary dynamo-electric machines
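The rotary position sensor reading the shaft-end magnet ultimately provides the angle used to commutate the motor through the FETs. The sketch below shows, in a purely hypothetical form, how a raw absolute-encoder count might be converted to mechanical and electrical angles; the counts-per-revolution, pole-pair count, and example reading are assumptions and are not taken from the abstract.

```python
# Hypothetical conversion of a raw rotary-sensor count to mechanical and
# electrical shaft angles for commutation. All constants are placeholders.
import math

def shaft_angles(raw_counts, counts_per_rev=4096, pole_pairs=7, offset_counts=0):
    """Return (mechanical angle, electrical angle) in radians; the electrical
    angle wraps pole_pairs times per mechanical revolution."""
    mech = 2 * math.pi * ((raw_counts - offset_counts) % counts_per_rev) / counts_per_rev
    elec = (mech * pole_pairs) % (2 * math.pi)
    return mech, elec

mech, elec = shaft_angles(raw_counts=1024)
print(round(math.degrees(mech), 1), round(math.degrees(elec), 1))   # 90.0 270.0
```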
69.
TRANSMISSION WITH INTEGRATED OVERLOAD PROTECTION FOR A LEGGED ROBOT
An example robot (300) includes: a motor (1300) disposed at a joint (403) configured to control motion of a member of the robot; a transmission (1200) including an input member (1312) coupled to and configured to rotate with the motor, an intermediate member (1314), and an output member (1318), where the intermediate member is fixed such that as the input member rotates, the output member rotates therewith at a different speed; a pad (1320) frictionally coupled to a side surface of the output member of the transmission and coupled to the member of the robot; and a spring (1328) configured to apply an axial preload on the pad, wherein the axial preload defines a torque limit such that, when the torque limit is exceeded by a torque load on the member of the robot, the output member of the transmission slips relative to the pad.
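The torque limit set by the spring's axial preload on the friction pad can be illustrated with a standard friction-clutch estimate. The calculation below is illustrative only; the uniform-pressure annular-pad model, friction coefficient, and dimensions are assumptions.

```python
# Illustrative slip-torque estimate for an axially preloaded friction pad.
def slip_torque(preload_n, mu, r_outer, r_inner):
    """Torque at which the output member slips relative to the pad:
    T = mu * F_preload * r_eff, with r_eff for a uniform-pressure annulus."""
    r_eff = (2.0 / 3.0) * (r_outer**3 - r_inner**3) / (r_outer**2 - r_inner**2)
    return mu * preload_n * r_eff

# 2 kN spring preload, mu = 0.35, 40 mm / 25 mm pad radii.
print(round(slip_torque(2000.0, 0.35, 0.040, 0.025), 1), "N*m")   # ~23.2 N*m
```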
An example robot (300) includes: a leg (304, 306) having an upper leg member (410) and a lower leg member (412) coupled to the upper leg member at a knee joint (404); a screw actuator (400) disposed within the upper leg member, where the screw actuator has a screw shaft (406) and a nut (408) mounted coaxial to the screw shaft such that the screw shaft is rotatable within the nut; a motor (402) mounted at an upper portion of the upper leg member and coupled to the screw shaft; a carrier (414) coupled and mounted coaxial to the nut such that the nut is disposed at a proximal end of the carrier; and a linkage (418, 422) coupled to the carrier, where the linkage is coupled to the lower leg member at the knee joint.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
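The screw actuator in the abstract above converts motor rotation into linear travel of the nut and carrier, which the linkage turns into knee motion. The sketch below gives the usual screw relationships for carrier speed and axial force; the lead, efficiency, and example values are assumptions.

```python
# Illustrative screw-actuator relationships for the upper-leg drive. Values invented.
import math

def carrier_speed(lead_m, motor_rpm):
    """Linear carrier speed (m/s) for a screw of the given lead."""
    return lead_m * motor_rpm / 60.0

def carrier_force(motor_torque_nm, lead_m, efficiency=0.85):
    """Approximate axial force on the carrier: F = 2*pi*T*eta / lead."""
    return 2 * math.pi * motor_torque_nm * efficiency / lead_m

print(carrier_speed(0.010, 3000))            # 0.5 m/s for a 10 mm lead at 3000 rpm
print(round(carrier_force(2.0, 0.010)))      # ~1068 N from 2 N*m of motor torque
```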