A computing system may provide a model of a robot. The model may be configured to determine simulated motions of the robot based on sets of control parameters. The computing system may also operate the model with multiple sets of control parameters to simulate respective motions of the robot. The computing system may further determine respective scores for each respective simulated motion of the robot, wherein the respective scores are based on constraints associated with each limb of the robot and a goal. The constraints include actuator constraints and joint constraints for limbs of the robot. Additionally, the computing system may select, based on the respective scores, a set of control parameters associated with a particular score. Further, the computing system may modify a behavior of the robot based on the selected set of control parameters to perform a coordinated exertion of forces by actuators of the robot.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
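The simulate-score-select loop described in the abstract above can be sketched as follows; the model, the constraint functions, and the scoring rule are illustrative assumptions, not details from the disclosure:

```python
# Hypothetical sketch: simulate each set of control parameters, score the
# resulting motion against constraints and a goal, and pick the best set.

def score(simulated_motion, constraints, goal):
    """Penalize constraint violations and distance from the goal."""
    penalty = sum(c(simulated_motion) for c in constraints)  # 0 if satisfied
    return -abs(simulated_motion - goal) - penalty

def select_parameters(model, parameter_sets, constraints, goal):
    """Simulate each parameter set, score it, and keep the highest scorer."""
    scored = []
    for params in parameter_sets:
        motion = model(params)  # simulated motion under these parameters
        scored.append((score(motion, constraints, goal), params))
    return max(scored)[1]  # parameters associated with the best score
```

In a real controller the model would be a physics simulation and the constraints would encode actuator and joint limits per limb; here they are stand-in callables.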
A robot leg assembly including a hip joint and an upper leg member. A proximal end portion of the upper leg member is rotatably coupled to the hip joint. The robot leg assembly includes a knee joint rotatably coupled to a distal end portion of the upper leg member, a lower leg member rotatably coupled to the knee joint, a linear actuator disposed on the upper leg member and defining a motion axis, a motor coupled to the linear actuator, and a linkage coupled to the translation stage and to the lower leg member. The linear actuator includes a translation stage moveable along the motion axis to translate rotational motion of the motor to linear motion of the translation stage along the motion axis, which moves the linkage to rotate the lower leg member relative to the upper leg member at the knee joint.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
An example implementation involves controlling robots with non-constant body pitch and height. The implementation involves obtaining a model of the robot that represents the robot as a first point mass rigidly coupled with a second point mass along a longitudinal axis. The implementation also involves determining a state of a first pair of legs, and determining a height of the first point mass based on the model and the state of the first pair of legs. The implementation further involves determining a first amount of vertical force for at least one leg of the first pair of legs to apply along a vertical axis against a surface while the at least one leg is in contact with the surface. Additionally, the implementation involves causing the at least one leg of the first pair of legs to begin applying the first amount of vertical force against the surface.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A kit includes a computing device configured to control motion of equipment for receiving one or more parcels in an environment of a mobile robot. The kit also includes a structure configured to couple to the equipment. The structure comprises an identifier configured to be sensed by a sensor of the mobile robot.
A computing device receives location information for a mobile robot. The computing device also receives location information for an entity in an environment of the mobile robot. The computing device determines a distance between the mobile robot and the entity in the environment of the mobile robot. The computing device determines one or more operating parameters for the mobile robot. The one or more operating parameters are based on the determined distance.
G05D 1/02 - Control of position or course in two dimensions
B25J 5/00 - Manipulators mounted on wheels or on carriages
B66F 9/06 - Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
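The distance-based operating parameters described in the abstract above could be realized as a simple lookup; the thresholds and speed limits below are invented for illustration and do not come from the disclosure:

```python
# Illustrative sketch: derive an operating parameter (a speed limit) from
# the distance between the mobile robot and an entity in its environment.

def operating_parameters(robot_xy, entity_xy):
    """Compute distance to the entity and pick a speed limit from it."""
    dx = robot_xy[0] - entity_xy[0]
    dy = robot_xy[1] - entity_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance < 1.0:
        return {"max_speed": 0.0, "distance": distance}  # stop when very close
    if distance < 3.0:
        return {"max_speed": 0.5, "distance": distance}  # slow zone
    return {"max_speed": 1.5, "distance": distance}      # nominal speed
```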
BRAKING AND REGENERATION CONTROL IN A LEGGED ROBOT
An example robot includes a hydraulic actuator cylinder controlling motion of a member of the robot. The hydraulic actuator cylinder comprises a piston, a first chamber, and a second chamber. A valve system controls hydraulic fluid flow between a hydraulic supply line of pressurized hydraulic fluid, the first and second chambers, and a return line. A controller may provide a first signal to the valve system so as to begin moving the piston based on a trajectory comprising moving in a forward direction, stopping, and moving in a reverse direction. The controller may provide a second signal to the valve system so as to cause the piston to override the trajectory as it moves in the forward direction and stop at a given position, and then provide a third signal to the valve system so as to resume moving the piston in the reverse direction based on the trajectory.
F15B 9/09 - Servomotors with follow-up action, i.e. in which the position of the actuated member conforms with that of the controlling member with servomotors of the reciprocatable or oscillatable type controlled by valves affecting the fluid feed or the fluid outlet of the servomotor with electrical control means
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
B25J 5/00 - Manipulators mounted on wheels or on carriages
Systems and methods for determining movement of a robot about an environment are provided. A computing system of the robot (i) receives information including a navigation target for the robot and a kinematic state of the robot; (ii) determines, based on the information and a trajectory target for the robot, a retargeted trajectory for the robot; (iii) determines, based on the retargeted trajectory, a centroidal trajectory for the robot and a kinematic trajectory for the robot consistent with the centroidal trajectory; and (iv) determines, based on the centroidal trajectory and the kinematic trajectory, a set of vectors having a vector for each of one or more joints of the robot.
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
G05D 1/02 - Control of position or course in two dimensions
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
A method for detecting boxes includes receiving a plurality of image frame pairs for an area of interest including at least one target box. Each image frame pair includes a monocular image frame and a respective depth image frame. For each image frame pair, the method includes determining corners for a rectangle associated with the at least one target box within the respective monocular image frame. Based on the determined corners, the method includes the following: performing edge detection and determining faces within the respective monocular image frame; and extracting planes corresponding to the at least one target box from the respective depth image frame. The method includes matching the determined faces to the extracted planes and generating a box estimation based on the determined corners, the performed edge detection, and the matched faces of the at least one target box.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
An actuation pressure to actuate one or more hydraulic actuators may be determined based on a load on the one or more hydraulic actuators of a robotic device. Based on the determined actuation pressure, a pressure rail from among a set of pressure rails at respective pressures may be selected. One or more valves may connect the selected pressure rail to a metering valve. The hydraulic drive system may operate in a discrete mode in which the metering valve opens such that hydraulic fluid flows from the selected pressure rail through the metering valve to the one or more hydraulic actuators at approximately the supply pressure of the selected rail. Responsive to a control state of the robotic device, the hydraulic drive system may operate in a continuous mode in which the metering valve throttles the hydraulic fluid such that the supply pressure is reduced to the determined actuation pressure.
F15B 11/18 - Servomotor systems without provision for follow-up action with two or more servomotors used in combination for obtaining stepwise operation of a single controlled member
G05D 1/02 - Control of position or course in two dimensions
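The rail-selection and mode logic described above can be sketched as follows; the rail pressures and function names are assumptions for illustration only:

```python
# Hypothetical sketch: pick the lowest pressure rail that meets the required
# actuation pressure, then either pass it through ("discrete" mode) or
# throttle it down to the required pressure ("continuous" mode).

RAILS_PSI = [500, 1500, 3000]  # example rail pressures, not from the text

def select_rail(actuation_pressure):
    """Pick the lowest rail at or above the required pressure."""
    for rail in sorted(RAILS_PSI):
        if rail >= actuation_pressure:
            return rail
    return max(RAILS_PSI)  # demand exceeds all rails: use the highest

def output_pressure(actuation_pressure, mode):
    """Pressure delivered to the actuators in each operating mode."""
    rail = select_rail(actuation_pressure)
    if mode == "discrete":
        return rail  # metering valve fully open: ~rail supply pressure
    return actuation_pressure  # "continuous": metering valve throttles down
```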
A method for calibrating a position measurement system includes receiving measurement data from the position measurement system and determining that the measurement data includes periodic distortion data. The position measurement system includes a nonius track and a master track. The method also includes modifying the measurement data by decomposing the periodic distortion data into periodic components and removing the periodic components from the measurement data.
G01D 5/244 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means generating pulses or pulse trains
G01D 5/347 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using optical means, i.e. using infrared, visible or ultraviolet light with attenuation or whole or partial obturation of beams of light the beams of light being detected by photocells using displacement encoding scales
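The "decompose and remove periodic components" step described above resembles subtracting a single Fourier component at a known distortion frequency; the sketch below assumes the distortion period (in samples) is known and the record spans whole periods:

```python
import math

# Sketch of removing one periodic distortion component from position
# measurements, as an illustration of the decompose-and-remove step.

def remove_periodic_component(samples, period):
    """Estimate and subtract the sinusoid with the given period."""
    n = len(samples)
    w = 2.0 * math.pi / period
    # Correlate with cosine/sine at the distortion frequency (one DFT bin).
    a = 2.0 / n * sum(s * math.cos(w * i) for i, s in enumerate(samples))
    b = 2.0 / n * sum(s * math.sin(w * i) for i, s in enumerate(samples))
    # Subtract the reconstructed component from each sample.
    return [s - a * math.cos(w * i) - b * math.sin(w * i)
            for i, s in enumerate(samples)]
```

A full calibration would repeat this for each identified harmonic of the nonius/master track geometry; this sketch handles a single component.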
A method of manipulating boxes includes receiving a minimum box size for a plurality of boxes varying in size located in a walled container. The method also includes dividing a grip area of a gripper into a plurality of zones. The method further includes locating a set of candidate boxes based on an image from a visual sensor. For each zone, the method additionally includes determining an overlap of the respective zone with one or more boxes neighboring the set of candidate boxes. The method also includes determining a grasp pose for a target candidate box that avoids one or more walls of the walled container. The method further includes executing the grasp pose to lift the target candidate box by the gripper, where the gripper activates each zone of the plurality of zones that does not overlap a respective box neighboring the target candidate box.
Methods and apparatus for online camera calibration are provided. The method comprises receiving a first image captured by a first camera of a robot, wherein the first image includes an object having at least one known dimension, receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap, projecting a plurality of points on the object in the first image to pixel locations in the second image, and determining, based on pixel locations of the plurality of points on the object in the second image and the projected plurality of points on the object, a reprojection error.
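The reprojection-error check described above can be sketched with a deliberately simplified one-dimensional pinhole model; the focal length, principal point, and baseline values are illustrative assumptions:

```python
# Minimal sketch of a reprojection-error computation: compare where points
# are actually observed in the second camera against where the current
# calibration predicts they should project.

def project(point_xyz, fx, cx, tx=0.0):
    """Project a 3D point to a pixel u-coordinate (1D pinhole, for brevity)."""
    x, _, z = point_xyz
    return fx * (x + tx) / z + cx

def reprojection_error(points, fx, cx, true_baseline, calib_baseline):
    """Mean |observed - predicted| pixel error over shared points."""
    total = 0.0
    for p in points:
        observed = project(p, fx, cx, tx=-true_baseline)   # actual detection
        predicted = project(p, fx, cx, tx=-calib_baseline)  # via calibration
        total += abs(observed - predicted)
    return total / len(points)
```

A nonzero error indicates the stored extrinsics (here, the baseline) have drifted and the online calibration should be updated.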
Methods and apparatus for performing automated inspection of one or more assets in an environment using a mobile robot are provided. The method comprises defining, within an image captured by a sensor of a robot, a region of interest that includes an asset in an environment of the robot, wherein the asset is associated with an asset identifier, configuring at least one parameter of a computer vision model based on the asset identifier, processing image data within the region of interest using the computer vision model to determine whether an alert should be generated, and outputting the alert when it is determined that the alert should be generated.
One disclosed method involves at least one application controlling navigation of a robot through an environment based at least in part on a topological map, the topological map including at least a first waypoint, a second waypoint, and a first edge representing a first path between the first waypoint and the second waypoint. The at least one application determines that the topological map includes at least one feature that identifies a first service that is configured to control the robot to perform at least one operation, and instructs the first service to perform the at least one operation as the robot travels along at least a portion of the first path.
A method for online authoring of robot autonomy applications includes receiving sensor data of an environment about a robot while the robot traverses through the environment. The method also includes generating an environmental map representative of the environment about the robot based on the received sensor data. While generating the environmental map, the method includes localizing a current position of the robot within the environmental map and, at each corresponding target location of one or more target locations within the environment, recording a respective action for the robot to perform. The method also includes generating a behavior tree for navigating the robot to each corresponding target location and controlling the robot to perform the respective action at each corresponding target location within the environment during a future mission when the current position of the robot within the environmental map reaches the corresponding target location.
According to one disclosed method, one or more sensors of a robot may receive data corresponding to one or more locations of the robot along a path the robot is following within an environment on a first occasion. Based on the received data, a determination may be made that one or more stairs exist in a first region of the environment. Further, when the robot is at a position along the path the robot is following on the first occasion, a determination may be made that the robot is expected to enter the first region. The robot may be controlled to operate in a first operational mode associated with traversal of stairs when it is determined that one or more stairs exist in the first region and the robot is expected to enter the first region.
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
Methods and apparatus for navigating a robot along a route through an environment, the route being associated with a mission, are provided. The method comprises identifying, based on sensor data received by one or more sensors of the robot, a set of potential obstacles in the environment, determining, based at least in part on stored data indicating a set of footfall locations of the robot during a previous execution of the mission, that at least one of the potential obstacles in the set is an obstacle, and navigating the robot to avoid stepping on the obstacle.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
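The footfall-based obstacle confirmation described above can be sketched as a simple filter; the coordinate representation and radius are illustrative assumptions:

```python
# Sketch of confirming potential obstacles against prior footfalls: a
# candidate that overlaps a location the robot safely stepped on during a
# previous execution of the mission is likely not a real obstacle.

def confirm_obstacles(candidates, prior_footfalls, radius=0.15):
    """Keep only candidates far from every previous footfall location."""
    confirmed = []
    for cx, cy in candidates:
        near_footfall = any(
            (cx - fx) ** 2 + (cy - fy) ** 2 <= radius ** 2
            for fx, fy in prior_footfalls
        )
        if not near_footfall:
            confirmed.append((cx, cy))
    return confirmed
```

Confirmed obstacles would then be passed to the step planner so the robot avoids stepping on them.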
An example implementation involves receiving measurements from an inertial sensor coupled to the robot and detecting an occurrence of a foot of the legged robot making contact with a surface. The implementation also involves reducing a gain value of an amplifier from a nominal value to a reduced value upon detecting the occurrence. The amplifier receives the measurements from the inertial sensor and provides a modulated output based on the gain value. The implementation further involves increasing the gain value from the reduced value to the nominal value over a predetermined duration of time after detecting the occurrence. The gain value is increased according to a profile indicative of a manner in which to increase the gain value over the predetermined duration of time. The implementation also involves controlling at least one actuator of the legged robot based on the modulated output during the predetermined duration of time.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G01L 5/00 - Apparatus for, or methods of, measuring force, work, mechanical power, or torque, specially adapted for specific purposes
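The gain scheduling described above can be sketched as a function of time since touchdown; the linear ramp and the specific gain values are assumptions (the disclosure only requires some profile over the predetermined duration):

```python
# Sketch of the touchdown gain schedule: drop the inertial-sensor amplifier
# gain at foot contact, then ramp it back to nominal over a fixed window.

def gain_at(t_since_contact, nominal=1.0, reduced=0.2, ramp_time=0.1):
    """Amplifier gain as a function of time since foot touchdown (seconds)."""
    if t_since_contact < 0.0:
        return nominal                  # before contact: nominal gain
    if t_since_contact >= ramp_time:
        return nominal                  # ramp complete: back to nominal
    frac = t_since_contact / ramp_time  # 0 -> 1 across the ramp window
    return reduced + (nominal - reduced) * frac
```

Attenuating the gain this way keeps the touchdown impact transient from propagating through the modulated output into the actuator commands.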
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method for generating intermediate waypoints for a navigation system of a robot includes receiving a navigation route. The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and is based on high-level navigation data. The high-level navigation data is representative of locations of static obstacles in an area the robot is to navigate. The method also includes receiving image data of an environment about the robot from an image sensor and generating at least one intermediate waypoint based on the image data. The method also includes adding the at least one intermediate waypoint to the series of high-level waypoints of the navigation route and navigating the robot from the starting location along the series of high-level waypoints and the at least one intermediate waypoint toward the destination location.
A drive system includes a linear actuator with a drive shaft and having an actuation axis extending along a length of the linear actuator. A motor assembly of the drive system couples to the drive shaft and is configured to rotate the drive shaft about the actuation axis of the linear actuator. The drive system further includes a nut attached to the drive shaft and a carrier housing the nut. A linkage system of the drive system extends from a proximal end away from the motor assembly to a distal end. The proximal end of the linkage system rotatably attaches to the carrier at a first proximal attachment location where the first proximal attachment location is offset from the actuation axis. The drive system also includes an output link rotatably coupled to the distal end of the linkage system where the output link is offset from the actuation axis.
A robot includes a drive system configured to maneuver the robot about an environment and data processing hardware in communication with memory hardware and the drive system. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving image data of the robot maneuvering in the environment and executing at least one waypoint heuristic. The at least one waypoint heuristic is configured to trigger a waypoint placement on a waypoint map. In response to the at least one waypoint heuristic triggering the waypoint placement, the operations include recording a waypoint on the waypoint map where the waypoint is associated with at least one waypoint edge and includes sensor data obtained by the robot. The at least one waypoint edge includes a pose transform expressing how to move between two waypoints.
A robotic device includes a control system. The control system receives a first measurement indicative of a first distance between a center of mass of the machine and a first position in which a first leg of the machine last made initial contact with a surface. The control system also receives a second measurement indicative of a second distance between the center of mass of the machine and a second position in which the first leg of the machine was last raised from the surface. The control system further determines a third position in which to place a second leg of the machine based on the received first measurement and the received second measurement. Additionally, the control system provides instructions to move the second leg of the machine to the determined third position.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
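The foot-placement rule described above can be illustrated in one dimension; combining the two measurements by averaging is purely an assumption for the sketch, not the disclosed method:

```python
# Illustrative sketch: place the second leg based on where the first leg
# last made initial contact (touchdown) and was last raised (liftoff),
# both measured as offsets from the center of mass.

def next_foot_position(com, touchdown_offset, liftoff_offset):
    """Pick a target position for the second leg from the two measurements."""
    stride = (touchdown_offset + liftoff_offset) / 2.0  # assumed combination
    return com + stride  # third position: where to place the second leg
```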
A method for palletizing by a robot includes positioning an object at an initial position adjacent to a target object location, tilting the object at an angle relative to a ground plane, shifting the object in a first direction from the initial position toward a first alignment position, shifting the object in a second direction from the first alignment position toward a second alignment position, and releasing the object from the robot to pivot the object toward the target object location.
B25J 15/06 - Gripping heads with vacuum or magnetic holding means
B65G 57/24 - Stacking of articles of particular shape three-dimensional, e.g. cubiform, cylindrical in layers, each of predetermined arrangement the layers being transferred as a whole, e.g. on pallets
An example implementation includes (i) receiving sensor data that indicates topographical features of an environment in which a robotic device is operating, (ii) processing the sensor data into a topographical map that includes a two-dimensional matrix of discrete cells, the discrete cells indicating sample heights of respective portions of the environment, (iii) determining, for a first foot of the robotic device, a first step path extending from a first lift-off location to a first touch-down location, (iv) identifying, within the topographical map, a first scan patch of cells that encompass the first step path, (v) determining a first high point among the first scan patch of cells, and (vi) during the first step, directing the robotic device to lift the first foot to a first swing height that is higher than the determined first high point.
G05D 1/02 - Control of position or course in two dimensions
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
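Steps (iv) through (vi) above reduce to a max over the scan patch plus a clearance margin; the grid representation and margin below are illustrative assumptions:

```python
# Sketch of the swing-height step: scan the height-map cells that the step
# path crosses, take the high point, and add clearance above it.

def swing_height(height_map, path_cells, clearance=0.05):
    """Lift the foot above the tallest cell under the step path."""
    high_point = max(height_map[r][c] for r, c in path_cells)
    return high_point + clearance
```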
A method includes receiving sensor data of an environment about a robot and generating a plurality of waypoints and a plurality of edges each connecting a pair of the waypoints. The method includes receiving a target destination for the robot to navigate to and determining a route specification based on waypoints and corresponding edges for the robot to follow for navigating the robot to the target destination selected from waypoints and edges previously generated. For each waypoint, the method includes generating a goal region encompassing the corresponding waypoint and generating at least one constraint region encompassing the goal region. The at least one constraint region establishes boundaries for the robot to remain within while traversing toward the target destination. The method includes navigating the robot to the target destination by traversing the robot through each goal region while maintaining the robot within the at least one constraint region.
An example implementation for determining mechanically-timed footsteps may involve a robot having a first foot in contact with a ground surface and a second foot not in contact with the ground surface. The robot may determine a position of its center of mass and center of mass velocity, and based on these, determine a capture point for the robot. The robot may also determine a threshold position for the capture point, where the threshold position is based on a target trajectory for the capture point after the second foot contacts the ground surface. The robot may determine that the capture point has reached this threshold position and, based on this determination, cause the second foot to contact the ground surface.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
G05D 1/08 - Control of attitude, i.e. control of roll, pitch, or yaw
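The capture point referenced above is commonly computed from the linear inverted pendulum model; the sketch below uses that standard formula, with the pendulum height treated as an assumed constant:

```python
import math

# Capture point of a linear inverted pendulum: the point where the robot
# would need to step to bring itself to rest, given its center-of-mass
# position and velocity.

def capture_point(com_pos, com_vel, height, g=9.81):
    """x_cp = x_com + v_com * sqrt(h / g), in one dimension."""
    return com_pos + com_vel * math.sqrt(height / g)

def should_touch_down(com_pos, com_vel, height, threshold):
    """Trigger the second foot's touchdown once the capture point passes
    the threshold position."""
    return capture_point(com_pos, com_vel, height) >= threshold
```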
A robot includes an input link, an output link, and a wire routing. The output link is coupled to the input link at an inline twist joint where the output link is configured to rotate about the longitudinal axis of the output link relative to the input link. The wire routing traverses the inline twist joint to couple the input link and the output link. The wire routing includes an input link section, an output link section, and an omega section. A first position of the wire routing coaxially aligns at a start of the omega section on the input link with a second position of the wire routing at an end of the omega section on the output link.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
A gripper mechanism includes a pair of gripper jaws, a linear actuator, and a rocker bogey. The linear actuator drives a first gripper jaw to move relative to a second gripper jaw. Here, the linear actuator includes a screw shaft and a drive nut where the drive nut includes a protrusion having a protrusion axis extending along a length of the protrusion. The protrusion axis is perpendicular to an actuation axis of the linear actuator along a length of the screw shaft. The rocker bogey is coupled to the drive nut at the protrusion to form a pivot point for the rocker bogey and to enable the rocker bogey to pivot about the protrusion axis when the linear actuator drives the first gripper jaw to move relative to the second gripper jaw.
A robot system includes: an upper body section including one or more end-effectors; a lower body section including one or more legs; and an intermediate body section coupling the upper and lower body sections. An upper body control system operates at least one of the end-effectors. The intermediate body section experiences a first intermediate body linear force and/or moment based on an end-effector force acting on the at least one end-effector. A lower body control system operates the one or more legs. The one or more legs experience respective surface reaction forces. The intermediate body section experiences a second intermediate body linear force and/or moment based on the surface reaction forces. The lower body control system operates the one or more legs so that the second intermediate body linear force balances the first intermediate body linear force and the second intermediate body moment balances the first intermediate body moment.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members specially adapted for moving on inclined or vertical surfaces
Aspects of the present disclosure provide techniques to undo a portion of a mission recording of a robot by physically moving the robot back through the mission recording in reverse. As a result, after the undo process is completed, the robot is positioned at an earlier point in the mission and the user can continue to record further mission data from that point. The portion of the mission recording that was performed in reverse can be omitted from subsequent performance of the mission, for example by deleting that portion from the mission recording or otherwise marking that portion as inactive. In this manner, the mistake in the initial mission recording is not retained, but the robot need not perform the entire mission recording again.
A clutch assembly includes a first member for mechanically coupling to an output shaft. A first material is frictionally coupled to a first side surface of the first member. A second material is frictionally coupled to a second side surface of the first member. A compliant member is configured to apply an axial force on at least one of the first material and the second material. A radial spring at least partially surrounds an exterior surface of the first member.
The disclosure provides a method for generating a joint command. The method includes receiving a maneuver script including a plurality of maneuvers for a legged robot to perform where each maneuver is associated with a cost. The method further includes identifying that two or more maneuvers of the plurality of maneuvers of the maneuver script occur at the same time instance. The method also includes determining a combined maneuver for the legged robot to perform at the time instance based on the two or more maneuvers and the costs associated with the two or more maneuvers. The method additionally includes generating a joint command to control motion of the legged robot at the time instance where the joint command commands a set of joints of the legged robot. Here, the set of joints correspond to the combined maneuver.
A method of controlling a robot includes: receiving, by a computing device, from one or more sensors, sensor data reflecting an environment of the robot, the one or more sensors configured to have a field of view that spans at least 150 degrees with respect to a ground plane of the robot; providing, by the computing device, video output to an extended reality (XR) display usable by an operator of the robot, the video output reflecting the environment of the robot; receiving, by the computing device, movement information reflecting movement by the operator of the robot; and controlling, by the computing device, the robot to move based on the movement information.
The present disclosure provides: at least one component of a rotary valve subassembly; a rotary valve assembly including the rotary valve subassembly; a hydraulic circuit including the rotary valve assembly; an assembly including a robot that incorporates the hydraulic circuit; and a method of operating the rotary valve assembly. The at least one component of the rotary valve subassembly includes a spool. The at least one component of the rotary valve subassembly includes a sleeve.
F16K 11/076 - Multiple-way valves, e.g. mixing valves; Pipe fittings incorporating such valves; Arrangement of valves and flow lines specially adapted for mixing fluid with all movable sealing faces moving as one unit comprising only sliding valves with pivoted closure members with sealing faces shaped as surfaces of solids of revolution
G05D 1/02 - Control of position or course in two dimensions
A method of localizing a robot includes receiving odometry information plotting locations of the robot and sensor data of the environment about the robot. The method also includes obtaining a series of odometry information members, each including a respective odometry measurement at a respective time. The method also includes obtaining a series of sensor data members, each including a respective sensor measurement at the respective time. The method also includes, for each sensor data member of the series of sensor data members, (i) determining a localization of the robot at the respective time based on the respective sensor data, and (ii) determining an offset of the localization relative to the odometry measurement at the respective time. The method also includes determining whether a variance of the offsets determined for the localizations exceeds a threshold variance. When the variance among the offsets exceeds the threshold variance, a signal is generated.
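The offset-variance check this abstract describes can be illustrated with a short Python sketch that compares localization fixes against odometry measurements taken at the same times. The function name, the 2-D position representation, and the use of scalar offset magnitudes are assumptions for illustration, not details from the patent:

```python
import statistics

def check_localization_drift(odometry, localizations, threshold):
    """Compare localizations against odometry at matching times.

    odometry and localizations map a timestamp to an (x, y) position.
    Returns (offsets, signal), where signal is True when the variance of
    the offset magnitudes exceeds the threshold.
    """
    offsets = []
    for t, loc in localizations.items():
        odo = odometry[t]
        # Offset of the localization relative to the odometry measurement.
        dx, dy = loc[0] - odo[0], loc[1] - odo[1]
        offsets.append((dx ** 2 + dy ** 2) ** 0.5)
    signal = statistics.variance(offsets) > threshold
    return offsets, signal
```

A consistent offset (e.g., a fixed map-to-odometry transform) produces low variance and no signal; offsets that jump around between time steps suggest a bad localization and raise the signal.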
A device for a robot includes a structure having a locking mechanism. The locking mechanism has an engaged configuration and a disengaged configuration. The device also includes a receiving surface mechanically coupled to the locking mechanism. The receiving surface is configured to interact with a member of the robot to move the locking mechanism between the engaged configuration and the disengaged configuration.
An example method may include i) detecting a disturbance to a gait of a robot, where the gait includes a swing state and a step down state, the swing state including a target swing trajectory for a foot of the robot, and where the target swing trajectory includes a beginning and an end; and ii) based on the detected disturbance, causing the foot of the robot to enter the step down state before the foot reaches the end of the target swing trajectory.
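The state transition described above can be sketched as a tiny gait state machine in which a detected disturbance forces the swing leg into the step down state before the swing trajectory completes. The function and state names are illustrative assumptions:

```python
def next_gait_state(state, progress, disturbance_detected):
    """Advance a swing leg's gait state.

    progress is the fraction of the target swing trajectory completed,
    in [0, 1]. A disturbance ends the swing early; otherwise the leg
    steps down only when the trajectory reaches its end.
    """
    if state == "swing" and (disturbance_detected or progress >= 1.0):
        return "step_down"
    return state
```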
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
43.
GLOBAL ARM PATH PLANNING WITH ROADMAPS AND PRECOMPUTED DOMAINS
A method of planning a path for an articulated arm of a robot includes generating a directed graph corresponding to a joint space of the articulated arm. The directed graph includes a plurality of nodes each corresponding to a joint pose of the articulated arm. The method also includes generating a planned path from a start node associated with a start pose of the articulated arm to an end node associated with a target pose of the articulated arm. The planned path includes a series of movements along the nodes between the start node and the end node. The method also includes determining when the articulated arm can travel to a subsequent node or the target pose, terminating a movement of the articulated arm towards a target node, and initiating a subsequent movement of the articulated arm to move directly to the target pose or the subsequent node.
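The roadmap search summarized above amounts to a shortest-path query over a directed graph whose nodes stand in for joint poses. A minimal sketch using Dijkstra's algorithm with Python's heapq follows; the graph encoding (adjacency lists of (neighbor, cost) pairs) is an assumption, not the patent's representation:

```python
import heapq

def plan_path(graph, start, goal):
    """Dijkstra search over a directed roadmap.

    graph maps a node to a list of (neighbor, edge_cost) pairs.
    Returns the list of nodes from start to goal, or None if unreachable.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(
                    frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None
```

In the precomputed-domain setting, the graph would be built offline over the arm's joint space and queried at runtime with the current start and target poses.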
B25J 9/04 - Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian co-ordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical co-ordinate type or polar co-ordinate type
44.
SYSTEMS AND METHODS OF COORDINATED BODY MOTION OF ROBOTIC DEVICES
Techniques are described that determine motion of a robot's body that will maintain an end effector within a useable workspace when the end effector moves according to a predicted future trajectory. The techniques may include determining or otherwise obtaining the predicted future trajectory of the end effector and utilizing the predicted future trajectory to determine any motion of the body that is necessary to maintain the end effector within the useable workspace. In cases where no such motion of the body is necessary because the predicted future trajectory indicates the end effector will stay within the useable workspace without motion of the body, the body may remain stationary, thereby avoiding the drawbacks caused by unnecessary motion. Otherwise, the body of the robot can be moved while the end effector moves to ensure that the end effector stays within the useable workspace.
Method and apparatus for object detection by a robot are provided. The method comprises analyzing using a set of trained detection models, one or more first images of an environment of the robot to detect one or more objects in the environment of the robot, generating at least one fine-tuned model by training one or more of the trained detection models in the set, wherein the training is based on a second image of the environment of the robot and annotations associated with the second image, wherein the annotations identify one or more objects in the second image, updating the set of trained detection models to include the generated at least one fine-tuned model, and analyzing using the updated set of trained detection models, one or more third images of the environment of the robot to detect one or more objects in the environment.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
46.
SYSTEMS AND METHODS OF LIGHTING FOR A MOBILE ROBOT
Methods and apparatus for controlling lighting of a mobile robot are provided. A mobile robot includes a drive system configured to enable the mobile robot to be driven, a navigation module configured to provide control instructions to the drive system, a plurality of lighting modules, wherein each of the plurality of lighting modules includes a plurality of individually-controllable light sources, and a controller configured to control an operation of the plurality of individually-controllable light sources based, at least in part, on navigation information received from the navigation module.
A virtual bumper configured to protect a component of a robotic device from damage is provided. The virtual bumper comprises a plurality of distance sensors arranged on the robotic device and at least one computing device configured to receive distance measurement signals from the plurality of distance sensors, detect, based on the received distance measurement signals, at least one object in a motion path of the component, and control the robot to change one or more operations of the robot to avoid a collision between the component and the at least one object.
Some robotic arms may include vacuum-based grippers. Detecting the seal quality between each vacuum assembly of the gripper and a grasped object may enable reactivation of some vacuum assemblies, thereby improving the grasp. One embodiment of a method may include activating each of a plurality of vacuum assemblies of a robotic gripper by supplying a vacuum to each vacuum assembly, determining, for each of the activated vacuum assemblies, a first respective seal quality of the vacuum assembly with a first grasped object, deactivating one or more of the activated vacuum assemblies based, at least in part, on the first respective seal qualities, and reactivating each of the deactivated vacuum assemblies within a reactivation interval.
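The deactivate-then-reactivate decision described above can be sketched as a simple per-assembly threshold on measured seal quality. The function name, the dict-based interface, and the [0, 1] quality scale are illustrative assumptions:

```python
def regrasp_cycle(seal_qualities, quality_threshold):
    """Decide which vacuum assemblies to cycle based on seal quality.

    seal_qualities maps an assembly id to a measured seal quality in
    [0, 1]. Assemblies below the threshold are deactivated and scheduled
    for reactivation within the reactivation interval; the rest keep
    holding the object.
    """
    deactivate = [a for a, q in seal_qualities.items()
                  if q < quality_threshold]
    keep = [a for a, q in seal_qualities.items()
            if q >= quality_threshold]
    return {"deactivate_then_reactivate": deactivate, "hold": keep}
```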
Disclosed herein are systems and methods directed to an industrial robot that can perform mobile manipulation (e.g., dexterous mobile manipulation). A robotic arm may be capable of precise control when reaching into tight spaces, may be robust to impacts and collisions, and/or may limit the mass of the robotic arm to reduce the load on the battery and increase runtime. A robotic arm may include differently configured proximal joints and/or distal joints. Proximal joints may be designed to promote modularity and may include separate functional units, such as modular actuators, encoders, bearings, and/or clutches. Distal joints may be designed to promote integration and may include offset actuators to enable a through-bore for the internal routing of vacuum, power, and signal connections.
Consistent connection strategies for coupling accessories to a robot can help achieve certain objectives, e.g., to tolerate and correct misalignment during coupling of the accessory. In some embodiments, the connection strategy may enable certain accessories to connect to certain sides of a robot. When connected, an accessory may be rigid in yaw, lateral motion, and fore/aft motion, while remaining unconstrained in roll and pitch as well as vertical motion. A sensor may enable detection of the accessory, and a mechanical fuse may release the accessory when a force threshold is exceeded. A mechanical coupler of an accessory may include two connectors, each of which includes a receiving area configured to receive a pin on the robot and a latch configured to retain the pin within the receiving area. The pins (and the receiving areas) may be differently sized, and may be differently arranged.
Methods and apparatus for object detection and pick order determination for a robotic device are provided. Information about a plurality of two-dimensional (2D) object faces of the objects in the environment may be processed to determine whether each of the plurality of 2D object faces matches a prototype object of a set of prototype objects stored in a memory, wherein each of the prototype objects in the set represents a three-dimensional (3D) object. A model of 3D objects in the environment of the robotic device is generated using one or more of the prototype objects in the set of prototype objects that was determined to match one or more of the 2D object faces.
Methods and apparatuses for detecting one or more objects (e.g., dropped objects) by a robotic device are described. The method comprises receiving a distance-based point cloud including a plurality of points in three dimensions, filtering the distance-based point cloud to remove points from the plurality of points based on at least one known surface in an environment of the robotic device to produce a filtered distance-based point cloud, clustering points in the filtered distance-based point cloud to produce a set of point clusters, and detecting one or more objects based, at least in part, on the set of point clusters.
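The filter-then-cluster step can be illustrated with a greedy distance-based clustering over 3-D points. This is a deliberate simplification (a single-pass pass can split a cluster that a later point would bridge, and real systems typically use spatial indexing); the function name and radius parameter are assumptions:

```python
def cluster_points(points, radius):
    """Greedy distance-based clustering of 3-D points.

    A point joins an existing cluster when it lies within `radius` of
    some point already in that cluster; otherwise it starts a new one.
    """
    clusters = []
    for p in points:
        for c in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

After filtering out points belonging to known surfaces (floor, shelving), each remaining cluster is a candidate dropped object.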
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
Methods and apparatus for determining a grasp strategy to grasp an object with a gripper of a robotic device are described. The method comprises generating a set of grasp candidates to grasp a target object, wherein each of the grasp candidates includes information about a gripper placement relative to the target object, determining, for each of the grasp candidates in the set, a grasp quality, wherein the grasp quality is determined using a physical-interaction model including one or more forces between the target object and the gripper located at the gripper placement for the respective grasp candidate, selecting, based at least in part on the determined grasp qualities, one of the grasp candidates, and controlling the robotic device to attempt to grasp the target object using the selected grasp candidate.
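The score-and-select step described above can be sketched as follows, with the physical-interaction model abstracted behind a caller-supplied quality function (the interface and names are illustrative assumptions, not the patent's model):

```python
def select_grasp(candidates, quality_fn):
    """Score each grasp candidate and pick the best.

    candidates is a list of gripper placements (any representation);
    quality_fn maps a candidate to a scalar grasp quality, standing in
    for the physical-interaction model of forces between gripper and
    object. Returns the selected candidate and its quality.
    """
    scored = [(quality_fn(c), c) for c in candidates]
    best_quality, best = max(scored, key=lambda s: s[0])
    return best, best_quality
```

The robot would then attempt the grasp with the selected candidate and, on failure, could fall back to the next-best placement.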
An apparatus for decoupling angular adjustments about perpendicular axes is described herein. The apparatus comprises a first plate, a second plate offset from the first plate in a first direction, a first pivot disposed between the first and second plates, a second pivot disposed between the first and second plates, and a third pivot disposed between the first and second plates. The second pivot is offset from the first pivot in a second direction perpendicular to the first direction. The third pivot is offset from the first pivot in a third direction perpendicular to both the first and second directions. The apparatus further includes a first wedge at least partially disposed between the second pivot and the second plate. The first wedge is configured to adjust a first angle between the first and second plates, the first angle being about a first axis extending along the third direction.
A method of mitigating slip conditions and estimating ground friction for a robot having a plurality of feet includes receiving a first coefficient of friction corresponding to a ground surface. The method also includes determining whether one of the plurality of feet is in contact with the ground surface, and when a first foot of the plurality of feet is in contact with the ground surface, setting a second coefficient of friction associated with the first foot equal to the first coefficient of friction. The method also includes determining a measured velocity of the first foot relative to the ground surface, and adjusting the second coefficient of friction of the first foot based on the measured velocity of the foot. One of the plurality of feet of the robot applies a force on the ground surface based on the adjusted second coefficient of friction.
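One plausible form of the velocity-based adjustment is to cut the per-foot friction estimate when measured slip exceeds a small threshold and let it recover slowly otherwise. All constants and the multiplicative/additive update rule here are illustrative assumptions, not values from the patent:

```python
def adjust_friction(mu, slip_speed, slip_threshold=0.01,
                    decay=0.5, recovery=0.02, mu_max=1.0):
    """Adjust a foot's friction coefficient from its measured slip speed.

    If the foot slides faster than slip_threshold (m/s), the estimate is
    reduced; otherwise it slowly recovers toward mu_max.
    """
    if slip_speed > slip_threshold:
        return mu * decay               # foot is slipping: trust the surface less
    return min(mu + recovery, mu_max)   # no slip: relax back toward the prior
```

Forces commanded at the feet would then be bounded by the current per-foot estimate rather than a single global coefficient.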
An electronic circuit comprises a charge storing component, a set of one or more switching components coupled to the charge storing component, and an additional switching component coupled to each of the one or more switching components in the set. The additional switching component is configured to operate in a first state or a second state based on a received current or voltage. The first state prevents current from flowing from the charge storing component to each of the one or more switching components in the set, and the second state allows current to flow from the charge storing component to each of the one or more switching components in the set.
H03K 17/56 - Electronic switching or gating, i.e. not by contact-making and -breaking characterised by the use of specified components by the use, as active elements, of semiconductor devices
H02P 3/22 - Arrangements for stopping or slowing electric motors, generators, or dynamo-electric converters for stopping or slowing an individual dynamo-electric motor or dynamo-electric converter for stopping or slowing an ac motor by short-circuit or resistive braking
A method for perception and fitting for a stair tracker includes receiving sensor data for a robot adjacent to a staircase. For each stair of the staircase, the method includes detecting, at a first time step, an edge of a respective stair of the staircase based on the sensor data. The method also includes determining whether the detected edge is a most likely step edge candidate by comparing the detected edge from the first time step to an alternative detected edge at a second time step, the second time step occurring after the first time step. When the detected edge is the most likely step edge candidate, the method includes defining a height of the respective stair based on sensor data height about the detected edge. The method also includes generating a staircase model including stairs with respective edges at the respective defined heights.
G05D 1/10 - Simultaneous control of position or course in three dimensions
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06F 18/243 - Classification techniques relating to the number of classes
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
58.
NONLINEAR TRAJECTORY OPTIMIZATION FOR ROBOTIC DEVICES
Systems and methods for determining movement of a robot are provided. A computing system of the robot receives information including an initial state of the robot and a goal state of the robot. The computing system determines, using nonlinear optimization, a candidate trajectory for the robot to move from the initial state to the goal state. The computing system determines whether the candidate trajectory is feasible. If the candidate trajectory is feasible, the computing system provides the candidate trajectory to a motion control module of the robot. If the candidate trajectory is not feasible, the computing system determines, using nonlinear optimization, a different candidate trajectory for the robot to move from the initial state to the goal state, the nonlinear optimization using one or more changed parameters.
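The solve/check/re-solve loop described above can be sketched with the solver, feasibility check, and parameter change abstracted behind caller-supplied callables (all names and the dict-based parameter representation are placeholders for the robot-specific pieces):

```python
def plan_trajectory(initial, goal, solve, is_feasible, relax, max_attempts=5):
    """Retry loop around a nonlinear trajectory optimizer.

    solve(initial, goal, params) returns a candidate trajectory;
    is_feasible(trajectory) checks it; relax(params) produces the
    changed parameters used for the next optimization attempt.
    """
    params = {}
    for _ in range(max_attempts):
        candidate = solve(initial, goal, params)
        if is_feasible(candidate):
            return candidate      # hand off to the motion control module
        params = relax(params)    # change parameters and re-solve
    return None
```

Capping the attempts keeps the planner from looping indefinitely on an unreachable goal state.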
A method for detecting boxes includes receiving a plurality of image frame pairs for an area of interest including at least one target box. Each image frame pair includes a monocular image frame and a respective depth image frame. For each image frame pair, the method includes determining corners for a rectangle associated with the at least one target box within the respective monocular image frame. Based on the determined corners, the method includes the following: performing edge detection and determining faces within the respective monocular image frame; and extracting planes corresponding to the at least one target box from the respective depth image frame. The method includes matching the determined faces to the extracted planes and generating a box estimation based on the determined corners, the performed edge detection, and the matched faces of the at least one target box.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to receive sensor data associated with a door. The data processing hardware determines, using the sensor data, door properties of the door. The door properties can include a door width, a grasp search ray, a grasp type, a swing direction, or a door handedness. The data processing hardware generates a door movement operation based on the door properties. The data processing hardware can execute the door movement operation to move the door. The door movement operation can include pushing the door, pulling the door, hooking a frame of the door, or blocking the door. The data processing hardware can utilize the door movement operation to enable a robot to traverse a door without human intervention.
G05D 1/02 - Control of position or course in two dimensions
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A method of planning a swing trajectory for a leg of a robot includes receiving an initial position of a leg of the robot, an initial velocity of the leg, a touchdown location, and a touchdown target time. The method also includes determining a difference between the initial position and the touchdown location and separating the difference between the initial position and the touchdown location into a horizontal motion component and a vertical motion component. The method also includes selecting a horizontal motion policy and a vertical motion policy to satisfy the motion components. Each policy produces a respective trajectory as a function of the initial position, the initial velocity, the touchdown location, and the touchdown target time. The method also includes executing the selected policies to swing the leg of the robot from the initial position to the touchdown location at the touchdown target time.
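The decomposition into horizontal and vertical policies can be sketched with a linear ground-plane interpolation and a parabolic height profile. The apex height, the specific policies, and the omission of the initial-velocity dependence are simplifying assumptions for illustration:

```python
def plan_swing(p0, touchdown, T):
    """Return a swing trajectory as a function of time.

    p0 and touchdown are (x, y, z) positions; T is the touchdown target
    time. Horizontal policy: linear interpolation in (x, y). Vertical
    policy: parabola that clears an apex and lands at the touchdown z.
    """
    def trajectory(t):
        a = min(max(t / T, 0.0), 1.0)   # normalized swing phase in [0, 1]
        x = p0[0] + a * (touchdown[0] - p0[0])
        y = p0[1] + a * (touchdown[1] - p0[1])
        apex = 0.1                       # illustrative swing height (m)
        z = p0[2] + a * (touchdown[2] - p0[2]) + 4.0 * apex * a * (1.0 - a)
        return (x, y, z)
    return trajectory
```

By construction the foot is at p0 at t = 0 and at the touchdown location at t = T, with the peak ground clearance at mid-swing.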
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
The present disclosure provides a brace system including an upper portion and a lower portion. The brace system may also include a first pulley rotatably coupling the upper portion to a first intermediate link positioned between the upper portion and the lower portion. The brace system may also include a second pulley rotatably coupling the first intermediate link to a second intermediate link positioned between the upper portion and the lower portion. The brace system may also include a third pulley rotatably coupling the second intermediate link to the lower portion. Further, the brace system may include at least one tension-bearing element substantially encircling each of the first pulley, the second pulley, and the third pulley.
A61H 3/00 - Appliances for aiding patients or disabled persons to walk about
A61F 5/01 - Orthopaedic devices, e.g. long-term immobilising or pressure directing devices for treating broken or deformed bones such as splints, casts or braces
A method for negotiating stairs includes receiving image data about a robot maneuvering in an environment with stairs. Here, the robot includes two or more legs. Prior to the robot traversing the stairs, for each stair, the method further includes determining a corresponding step region based on the received image data. The step region identifies a safe placement area on a corresponding stair for a distal end of a corresponding swing leg of the robot. Also prior to the robot traversing the stairs, the method includes shifting a weight distribution of the robot towards a front portion of the robot. When the robot traverses the stairs, the method further includes, for each stair, moving the distal end of the corresponding swing leg of the robot to a target step location where the target step location is within the corresponding step region of the stair.
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A control system may receive a first plurality of measurements indicative of respective joint angles corresponding to a plurality of sensors connected to a robot. The robot may include a body and a plurality of jointed limbs connected to the body associated with respective properties. The control system may also receive a body orientation measurement indicative of an orientation of the body of the robot. The control system may further determine a relationship between the first plurality of measurements and the body orientation measurement based on the properties associated with the jointed limbs of the robot. Additionally, the control system may estimate an aggregate orientation of the robot based on the first plurality of measurements, the body orientation measurement, and the determined relationship. Further, the control system may provide instructions to control at least one jointed limb of the robot based on the estimated aggregate orientation of the robot.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
65.
Transmission with integrated overload protection for a legged robot
An example robot includes: a motor disposed at a joint configured to control motion of a member of the robot; a transmission including an input member coupled to and configured to rotate with the motor, an intermediate member, and an output member, where the intermediate member is fixed such that as the input member rotates, the output member rotates therewith at a different speed; a pad frictionally coupled to a side surface of the output member of the transmission and coupled to the member of the robot; and a spring configured to apply an axial preload on the pad, wherein the axial preload defines a torque limit such that, when the torque limit is exceeded by a torque load on the member of the robot, the output member of the transmission slips relative to the pad.
Systems and methods for determining movement of a robot about an environment are provided. A computing system of the robot (i) receives information including a navigation target for the robot and a kinematic state of the robot; (ii) determines, based on the information and a trajectory target for the robot, a retargeted trajectory for the robot; (iii) determines, based on the retargeted trajectory, a centroidal trajectory for the robot and a kinematic trajectory for the robot consistent with the centroidal trajectory; and (iv) determines, based on the centroidal trajectory and the kinematic trajectory, a set of vectors having a vector for each of one or more joints of the robot.
G05D 1/02 - Control of position or course in two dimensions
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
A dynamic planning controller receives a maneuver for a robot and a current state of the robot and transforms the maneuver and the current state of the robot into a nonlinear optimization problem. The nonlinear optimization problem is configured to optimize an unknown force and an unknown position vector. At a first time instance, the controller linearizes the nonlinear optimization problem into a first linear optimization problem and determines a first solution to the first linear optimization problem using quadratic programming. At a second time instance, the controller linearizes the nonlinear optimization problem into a second linear optimization problem based on the first solution at the first time instance and determines a second solution to the second linear optimization problem based on the first solution using the quadratic programming. The controller also generates a joint command to control motion of the robot during the maneuver based on the second solution.
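The repeated linearize-and-solve structure can be illustrated in miniature with a scalar problem: at each iteration the nonlinear objective is linearized about the previous solution and the resulting local quadratic model is minimized in closed form, standing in for the quadratic-programming step. This is a generic successive-linearization sketch, not the patent's controller:

```python
def successive_linearization(grad, hess, x0, iterations=10):
    """Solve a 1-D nonlinear minimization by repeated linearization.

    grad and hess give the objective's first and second derivatives.
    Each iteration minimizes the local quadratic model built at the
    previous solution (a Newton step), mirroring how each new linear
    subproblem is formed from the prior solution.
    """
    x = x0
    for _ in range(iterations):
        x = x - grad(x) / hess(x)   # minimizer of the local quadratic model
    return x
```

In the controller setting, the decision variable would instead be the stacked unknown forces and position vectors, and each subproblem would be a constrained QP rather than a closed-form step.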
G05B 13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
G06N 5/00 - Computing arrangements using knowledge-based models
68.
Topology Processing for Waypoint-based Navigation Maps
The operations of a computer-implemented method include obtaining a topological map of an environment including a series of waypoints and a series of edges. Each edge topologically connects a corresponding pair of adjacent waypoints. The edges represent traversable routes for a robot. The operations include determining, using the topological map and sensor data captured by the robot, one or more candidate alternate edges. Each candidate alternate edge potentially connects a corresponding pair of waypoints that are not connected by one of the edges. For each respective candidate alternate edge, the operations include determining, using the sensor data, whether the robot can traverse the respective candidate alternate edge without colliding with an obstacle and, when the robot can traverse the respective candidate alternate edge, confirming the respective candidate alternate edge as a respective alternate edge. The operations include updating, using nonlinear optimization and the confirmed alternate edges, the topological map.
G05D 1/02 - Control of position or course in two dimensions
G06K 9/62 - Methods or arrangements for recognition using electronic means
G01C 22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or using pedometers
69.
Alternate Route Finding for Waypoint-based Navigation Maps
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to perform operations including obtaining a topological map including waypoints and edges. Each edge connects adjacent waypoints. The waypoints and edges represent a navigation route for the robot to follow. Operations include determining that an edge that connects first and second waypoints is blocked by an obstacle. Operations include generating, using image data and the topological map, one or more alternate waypoints offset from one of the waypoints. For each alternate waypoint, operations include generating an alternate edge connecting the alternate waypoint to a waypoint. Operations include adjusting the navigation route to include at least one alternate waypoint and alternate edge that bypass the obstacle. Operations include navigating the robot from the first waypoint to an alternate waypoint along the alternate edge connecting the alternate waypoint to the first waypoint.
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to perform operations. The operations include receiving a sensor pointing command that commands the robot to use a sensor to capture sensor data of a location in an environment of the robot. The sensor is disposed on the robot. The operations include determining, based on an orientation of the sensor relative to the location, a direction for pointing the sensor toward the location, and an alignment pose of the robot to cause the sensor to point in the direction toward the location. The operations include commanding the robot to move from a current pose to the alignment pose. After the robot moves to the alignment pose and the sensor is pointing in the direction toward the location, the operations include commanding the sensor to capture the sensor data of the location in the environment.
A computer-implemented method when executed by data processing hardware causes the data processing hardware to perform operations. The operations include receiving a navigation route for a mobile robot. The navigation route includes a sequence of waypoints connected by edges. Each edge corresponds to movement instructions that navigate the mobile robot between waypoints of the sequence of waypoints. While the mobile robot is traveling along the navigation route, the operations include determining that the mobile robot is unable to execute a respective movement instruction for a respective edge of the navigation route due to an obstacle obstructing the respective edge, generating an alternative path to navigate the mobile robot to an untraveled waypoint in the sequence of waypoints, and resuming travel by the mobile robot along the navigation route. The alternative path avoids the obstacle.
A computer-implemented method when executed by data processing hardware causes the data processing hardware to perform operations. The operations include detecting a candidate support surface at an elevation less than a current surface supporting a legged robot. A determination is made on whether the candidate support surface includes an area of missing terrain data within a portion of an environment surrounding the legged robot, where the area is large enough to receive a touchdown placement for a leg of the legged robot. If missing terrain data is determined, at least a portion of the area of missing terrain data is classified as a no-step region of the candidate support surface. The no-step region indicates a region where the legged robot should avoid touching down a leg of the legged robot.
G05D 1/02 - Control of position or course in two dimensions
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method of constrained mobility mapping includes receiving from at least one sensor of a robot at least one original set of sensor data and a current set of sensor data. Here, each of the at least one original set of sensor data and the current set of sensor data corresponds to an environment about the robot. The method further includes generating a voxel map including a plurality of voxels based on the at least one original set of sensor data. The plurality of voxels includes at least one ground voxel and at least one obstacle voxel. The method also includes generating a spherical depth map based on the current set of sensor data and determining that a change has occurred to an obstacle represented by the voxel map based on a comparison between the voxel map and the spherical depth map. The method additionally includes updating the voxel map to reflect the change to the obstacle.
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
Systems and methods related to intelligent grippers with individual cup control are disclosed. One aspect of the disclosure provides a method of determining grip quality between a robotic gripper and an object. The method comprises applying a vacuum to two or more cup assemblies of the robotic gripper in contact with the object, moving the object with the robotic gripper after applying the vacuum to the two or more cup assemblies, and determining, using at least one pressure sensor associated with each of the two or more cup assemblies, a grip quality between the robotic gripper and the object.
A method of robotic stepping includes determining a first step location error between a reference step location of a reference step path and a first potential step location of a first potential step path for a first leg of a robot, determining a first capture point error between a reference capture point location of the reference step path and a first potential capture point location of the first potential step path, determining a first score for the first potential step path based on the first step location error and the first capture point error, selecting the first potential step path based on comparing the first score for the first potential step path to a second score of a second potential step path, and instructing a movement of the first leg of the robot based on the first potential step path.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
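The scoring and selection described in the robotic stepping abstract above can be sketched as a weighted combination of the step location error and the capture point error, with the lowest-scoring candidate path selected. The weights and function names here are hypothetical illustrations, not the patented formulation.

```python
def path_score(step_err, capture_err, w_step=1.0, w_capture=1.0):
    """Score a candidate step path from its two error terms; lower is better.

    The weighted-sum form and the weights are assumptions for illustration.
    """
    return w_step * step_err + w_capture * capture_err


def select_step_path(candidates):
    """Pick the best candidate from (name, step_err, capture_err) tuples
    by comparing their scores, mirroring the compare-and-select step."""
    return min(candidates, key=lambda c: path_score(c[1], c[2]))[0]
```

For instance, a path with errors (0.2, 0.1) scores 0.3 and beats a path with errors (0.05, 0.3), which scores 0.35.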
A method for palletizing by a robot includes positioning an object at an initial position adjacent to a target object location, tilting the object at an angle relative to a ground plane, shifting the object in a first direction from the initial position toward a first alignment position, shifting the object in a second direction from the first alignment position toward a second alignment position, and releasing the object from the robot to pivot the object toward the target object location.
B25J 15/06 - Gripping heads with vacuum or magnetic holding means
B65G 57/24 - Stacking of articles of particular shape three-dimensional, e.g. cubiform, cylindrical in layers, each of predetermined arrangement the layers being transferred as a whole, e.g. on pallets
77.
Global arm path planning with roadmaps and precomputed domains
A method of planning a path for an articulated arm of a robot includes generating a directed graph corresponding to a joint space of the articulated arm. The directed graph includes a plurality of nodes each corresponding to a joint pose of the articulated arm. The method also includes generating a planned path from a start node associated with a start pose of the articulated arm to an end node associated with a target pose of the articulated arm. The planned path includes a series of movements along the nodes between the start node and the end node. The method also includes determining when the articulated arm can travel to a subsequent node or the target pose, terminating a movement of the articulated arm towards a target node, and initiating a subsequent movement of the articulated arm to move directly to the target pose or the subsequent node.
B25J 9/04 - Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian co-ordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical co-ordinate type or polar co-ordinate type
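Planning over a precomputed directed graph of joint poses, as in the roadmap abstract above, is commonly done with a shortest-path search. The sketch below uses Dijkstra's algorithm over an adjacency-list roadmap; the data layout and function name are assumptions for illustration, not the patented planner.

```python
import heapq

def plan_path(graph, start, goal):
    """Shortest path over a precomputed roadmap.

    graph: dict mapping node -> list of (neighbor, edge_cost).
    Returns the node sequence from start to goal, or None if unreachable.
    """
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return None
```

On a roadmap where `"s"` reaches `"g"` either via `"a"` (total cost 3) or via `"b"` (total cost 5), the planner returns the cheaper sequence `["s", "a", "g"]`.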
A method of footstep contact detection includes receiving joint dynamics data for a swing phase of a swing leg of the robot, receiving odometry data indicative of a pose of the robot, determining whether an impact on the swing leg is indicative of a touchdown of the swing leg based on the joint dynamics data and an amount of completion of the swing phase, and determining, when the impact on the swing leg is not indicative of the touchdown of the swing leg, a cause of the impact based on the joint dynamics data and the odometry data.
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
An imaging apparatus includes a structural support rigidly coupled to a surface of a mobile robot and a plurality of perception modules, each of which is arranged on the structural support, has a different field of view, and includes a two-dimensional (2D) camera configured to capture a color image of an environment, a depth sensor configured to capture depth information of one or more objects in the environment, and at least one light source configured to provide illumination to the environment. The imaging apparatus further includes control circuitry configured to control a timing of operation of the 2D camera, the depth sensor, and the at least one light source included in each of the plurality of perception modules, and at least one computer processor configured to process the color image and the depth information to identify at least one characteristic of one or more objects in the environment.
A perception mast for a mobile robot is provided. The mobile robot comprises a mobile base, a turntable operatively coupled to the mobile base, the turntable configured to rotate about a first axis, an arm operatively coupled to a first location on the turntable, and the perception mast operatively coupled to a second location on the turntable, the perception mast configured to rotate about a second axis parallel to the first axis, wherein the perception mast includes, disposed thereon, a first perception module and a second perception module arranged between the first perception module and the turntable.
A method of estimating one or more mass characteristics of a payload manipulated by a robot includes moving the payload using the robot, determining one or more accelerations of the payload while the payload is in motion, sensing, using one or more sensors of the robot, a wrench applied to the payload while the payload is in motion, and estimating the one or more mass characteristics of the payload based, at least in part, on the determined accelerations and the sensed wrench.
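The mass-estimation abstract above pairs sensed wrenches with determined accelerations. A minimal illustration of the idea is a least-squares fit of Newton's second law, F = m·a, over paired force and acceleration magnitudes; the sample-based formulation and function name are assumptions, not the patented estimator, which also covers richer mass characteristics such as center of mass and inertia.

```python
def estimate_mass(force_mags, accel_mags):
    """Least-squares mass estimate from paired |F| and |a| samples.

    Minimizes sum over samples of (F_i - m * a_i)^2, giving
    m = (sum F_i * a_i) / (sum a_i^2).
    """
    num = sum(f * a for f, a in zip(force_mags, accel_mags))
    den = sum(a * a for a in accel_mags)
    return num / den
```

With noise-free samples generated by a 2 kg payload (forces 2 N and 4 N at accelerations 1 m/s² and 2 m/s²), the estimate recovers 2.0 exactly.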
A robot comprises a mobile base, a robotic arm operatively coupled to the mobile base, and at least one interface configured to enable selective coupling to at least one accessory. The at least one interface comprises an electrical interface configured to transmit power and/or data between the robot and the at least one accessory, and a mechanical interface configured to enable physical coupling between the robot and the at least one accessory.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
B25J 5/00 - Manipulators mounted on wheels or on carriages
A robot includes a mobile base, a turntable rotatably coupled to the mobile base, a robotic arm operatively coupled to the turntable, and at least one directional sensor. An orientation of the at least one directional sensor is independently controllable. A method of controlling a robotic arm includes controlling a state of a mobile base and controlling a state of a robotic arm coupled to the mobile base, based, at least in part, on the state of the mobile base.
A robot comprises a mobile base, a robotic arm operatively coupled to the mobile base, a plurality of distance sensors, at least one antenna configured to receive one or more signals from a monitoring system external to the robot, and a computer processor. The computer processor is configured to limit one or more operations of the robot when it is determined that the one or more signals are not received by the at least one antenna.
A method for controlling a robot includes receiving image data from at least one image sensor. The image data corresponds to an environment about the robot. The method also includes executing a graphical user interface configured to display a scene of the environment based on the image data and receive an input indication indicating selection of a pixel location within the scene. The method also includes determining a pointing vector based on the selection of the pixel location. The pointing vector represents a direction of travel for navigating the robot in the environment. The method also includes transmitting a waypoint command to the robot. The waypoint command when received by the robot causes the robot to navigate to a target location. The target location is based on an intersection between the pointing vector and a terrain estimate of the robot.
A method for calibrating a position measurement system includes receiving measurement data from the position measurement system and determining that the measurement data includes periodic distortion data. The position measurement system includes a nonius track and a master track. The method also includes modifying the measurement data by decomposing the periodic distortion data into periodic components and removing the periodic components from the measurement data.
G01D 5/244 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means generating pulses or pulse trains
G01D 5/347 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using optical means, i.e. using infrared, visible or ultraviolet light with attenuation or whole or partial obturation of beams of light the beams of light being detected by photocells using displacement encoding scales
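The encoder-calibration abstract above decomposes periodic distortion into periodic components and removes them from the measurement data. A minimal sketch of that idea fits a single sinusoid of known period to the samples (via its Fourier coefficients) and subtracts it; the single-harmonic assumption and function name are illustrative, not the patented decomposition.

```python
import math

def remove_periodic_component(samples, period):
    """Fit and subtract one sinusoidal distortion of known period (sketch).

    Computes the cosine and sine Fourier coefficients at the given period,
    then removes that component from every sample. Assumes the period
    divides the sample count so the basis functions are orthogonal.
    """
    n = len(samples)
    a = sum(s * math.cos(2 * math.pi * i / period) for i, s in enumerate(samples)) * 2 / n
    b = sum(s * math.sin(2 * math.pi * i / period) for i, s in enumerate(samples)) * 2 / n
    return [s - a * math.cos(2 * math.pi * i / period)
              - b * math.sin(2 * math.pi * i / period)
            for i, s in enumerate(samples)]
```

Applied to a pure sinusoid of the target period, the residual is zero to floating-point precision; in practice the same fit would be applied per harmonic of the nonius and master tracks.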
87.
Semantic Models for Robot Autonomy on Dynamic Sites
A method includes receiving, while a robot traverses a building environment, sensor data captured by one or more sensors of the robot. The method includes receiving a building information model (BIM) for the environment that includes semantic information identifying one or more permanent objects within the environment. The method includes generating a plurality of localization candidates for a localization map of the environment. Each localization candidate corresponds to a feature of the environment identified by the sensor data and represents a potential localization reference point. The localization map is configured to localize the robot within the environment when the robot moves throughout the environment. For each localization candidate, the method includes determining whether the respective feature corresponding to the respective localization candidate is a permanent object in the environment and generating the respective localization candidate as a localization reference point in the localization map for the robot.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A method includes receiving sensor data for an environment about the robot. The sensor data is captured by one or more sensors of the robot. The method includes detecting one or more objects in the environment using the received sensor data. For each detected object, the method includes authoring an interaction behavior indicating a behavior that the robot is capable of performing with respect to the corresponding detected object. The method also includes augmenting a localization map of the environment to reflect the respective interaction behavior of each detected object.
A computer-implemented method when executed by data processing hardware of a legged robot causes the data processing hardware to perform operations including receiving sensor data corresponding to an area including at least a portion of a docking station. The operations include determining an estimated pose for the docking station based on an initial pose of the legged robot relative to the docking station. The operations include identifying one or more docking station features from the received sensor data. The operations include matching the one or more identified docking station features to one or more known docking station features. The operations include adjusting the estimated pose for the docking station to a corrected pose for the docking station based on an orientation of the one or more identified docking station features that match the one or more known docking station features.
B60L 53/36 - Means for automatic or assisted adjustment of the relative position of charging devices and vehicles by positioning the vehicle
G05D 1/02 - Control of position or course in two dimensions
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members
B60L 53/16 - Connectors, e.g. plugs or sockets, specially adapted for charging electric vehicles
Data processing hardware of a robot performs operations to identify a door within an environment. A robotic manipulator of the robot grasps a feature of the door on a first side facing the robot. When the door opens in a first direction toward the robot, the robotic manipulator exerts a pull force to swing the door in the first direction, a leg of the robot moves to a position that blocks the door from swinging in the second direction, the robotic manipulator contacts the door on a second side opposite the first side, and the robotic manipulator exerts a door opening force on the second side as the robot traverses a doorway corresponding to the door. When the door opens in a second direction away from the robot, the robotic manipulator exerts the door opening force on the first side as the robot traverses the doorway.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
A computer-implemented method, executed by data processing hardware of a robot, includes receiving sensor data for a space within an environment about the robot. The method includes receiving, from a user interface (UI) in communication with the data processing hardware, a user input indicating a user-selection of a location within a two-dimensional (2D) representation of the space. The location corresponds to a position of a target object within the space. The method includes receiving, from the UI, a plurality of grasping inputs designating an orientation and a translation for an end-effector of a robotic manipulator to grasp the target object. The method includes generating a three-dimensional (3D) location of the target object based on the received sensor data and the location corresponding to the user input. The method includes instructing the end-effector to grasp the target object using the generated 3D location and the plurality of grasping inputs.
A computer-implemented method includes generating a joint-torque-limit model for the articulated arm based on allowable joint torque sets corresponding to a base pose of the base. The method also includes receiving a first requested joint torque set for a first arm pose of the articulated arm and determining, using the joint-torque-limit model, an optimized joint torque set corresponding to the first requested joint torque set. The method also includes receiving a second requested joint torque set for a second arm pose of the articulated arm and generating an adjusted joint torque set by adjusting the second requested joint torque set based on the optimized joint torque set. The method also includes sending the adjusted joint torque set to the articulated arm.
A computer-implemented method, when executed by data processing hardware of a robot having an articulated arm and a base, causes data processing hardware to perform operations. The operations include determining a first location of a workspace of the articulated arm associated with a current base configuration of the base of the robot. The operations also include receiving a task request defining a task for the robot to perform outside of the workspace of the articulated arm at the first location. The operations also include generating base parameters associated with the task request. The operations further include instructing, using the generated base parameters, the base of the robot to move from the current base configuration to an anticipatory base configuration.
A computer-implemented method executed by data processing hardware of a robot causes the data processing hardware to perform operations. The robot includes an articulated arm having an end effector engaged with a constrained object. The operations include receiving a measured task parameter set for the end effector. The measured task parameter set includes position parameters defining a position of the end effector. The operations further include determining, using the measured task parameter set, at least one axis of freedom and at least one constrained axis for the end effector within a workspace. The operations also include assigning a first impedance value to the end effector along the at least one axis of freedom and assigning a second impedance value to the end effector along the at least one constrained axis. The operations include instructing the articulated arm to move the end effector along the at least one axis of freedom.
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
G05B 19/4155 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
A computer-implemented method, executed by data processing hardware of a robot, includes receiving a three-dimensional point cloud of sensor data for a space within an environment about the robot. The method includes receiving a selection input indicating a user-selection of a target object represented in an image corresponding to the space. The target object is for grasping by an end-effector of a robotic manipulator of the robot. The method includes generating a grasp region for the end-effector of the robotic manipulator by projecting a plurality of rays from the selected target object of the image onto the three-dimensional point cloud of sensor data. The method includes determining a grasp geometry for the robotic manipulator to grasp the target object within the grasp region. The method includes instructing the end-effector of the robotic manipulator to grasp the target object within the grasp region based on the grasp geometry.
A method includes obtaining, from an operator of a robot, a return execution lease associated with one or more commands for controlling the robot that is scheduled within a sequence of execution leases. The robot is configured to execute commands associated with a current execution lease that is an earliest execution lease in the sequence of execution leases that is not expired. The method includes obtaining an execution lease expiration trigger triggering expiration of the current execution lease. After obtaining the trigger, the method includes determining that the return execution lease is a next current execution lease in the sequence. While the return execution lease is the current execution lease, the method includes executing the one or more commands for controlling the robot associated with the return execution lease which cause the robot to navigate to a return location remote from a current location of the robot.
A method for terrain and constraint planning of a step plan includes receiving, at data processing hardware of a robot, image data of an environment about the robot from at least one image sensor. The robot includes a body and legs. The method also includes generating, by the data processing hardware, a body-obstacle map, a ground height map, and a step-obstacle map based on the image data and generating, by the data processing hardware, a body path for movement of the body of the robot while maneuvering in the environment based on the body-obstacle map. The method also includes generating, by the data processing hardware, a step path for the legs of the robot while maneuvering in the environment based on the body path, the body-obstacle map, the ground height map, and the step-obstacle map.
A method for estimating a ground plane includes receiving a pose of a robotic device with respect to a gravity aligned reference frame, receiving one or more locations of one or more corresponding contact points between the robotic device and a ground surface, and determining a ground plane estimation of the ground surface based on the orientation of the robotic device with respect to the gravity aligned reference frame and the one or more locations of one or more corresponding contact points between the robotic device and the ground surface. The ground plane estimation includes a ground surface contour approximation. The method further includes determining a distance between a body of the robotic device and the determined ground plane estimation and causing adjustment of the pose of the robotic device with respect to the ground surface based on the determined distance and the determined ground plane estimation.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
G05D 1/08 - Control of attitude, i.e. control of roll, pitch, or yaw
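The ground-plane abstract above estimates a plane from the contact points and then measures the body's distance to it. A minimal sketch fits the plane through three foot contacts via a cross product and returns the signed distance of a body point from that plane; the three-contact case and function names are assumptions for illustration.

```python
import math

def plane_from_contacts(p1, p2, p3):
    """Plane through three contact points, returned as (unit normal n, offset d)
    such that n . x = d for points x on the plane."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Normal is the cross product of the two edge vectors.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]
    if n[2] < 0:  # orient the normal upward, toward the gravity-aligned +z axis
        n = [-c for c in n]
    d = sum(n[i] * p1[i] for i in range(3))
    return n, d

def body_height_above_plane(body, n, d):
    """Signed distance from the body position to the estimated ground plane."""
    return sum(n[i] * body[i] for i in range(3)) - d
```

For feet at (0,0,0), (1,0,0), and (0,1,0), the estimate is the horizontal plane z = 0, and a body at height 0.4 m is 0.4 m above it; the pose adjustment described in the abstract would then servo this distance toward a desired clearance.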
A method for generating intermediate waypoints for a navigation system of a robot includes receiving a navigation route. The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and is based on high-level navigation data. The high-level navigation data is representative of locations of static obstacles in an area the robot is to navigate. The method also includes receiving image data of an environment about the robot from an image sensor and generating at least one intermediate waypoint based on the image data. The method also includes adding the at least one intermediate waypoint to the series of high-level waypoints of the navigation route and navigating the robot from the starting location along the series of high-level waypoints and the at least one intermediate waypoint toward the destination location.
An example method may include i) determining a first distance between a pair of feet of a robot at a first time, where the pair of feet is in contact with a ground surface; ii) determining a second distance between the pair of feet of the robot at a second time, where the pair of feet remains in contact with the ground surface from the first time to the second time; iii) comparing a difference between the determined first and second distances to a threshold difference; iv) determining that the difference between determined first and second distances exceeds the threshold difference; and v) based on the determination that the difference between the determined first and second distances exceeds the threshold difference, causing the robot to react.
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
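Steps i through v of the example method above amount to a slip check: compare the change in stance-foot separation against a threshold and react when it is exceeded. The sketch below shows that check for a pair of planar foot positions; the function name and threshold value are illustrative assumptions.

```python
import math

def detect_slip(feet_t1, feet_t2, threshold):
    """Return True when the distance between a pair of stance feet changes
    by more than threshold between two times (steps i-iv of the method).

    feet_t1, feet_t2: ((x, y), (x, y)) foot positions at the first and
    second times, with both feet in ground contact throughout.
    """
    d1 = math.dist(*feet_t1)  # first distance between the pair of feet
    d2 = math.dist(*feet_t2)  # second distance between the same pair
    return abs(d2 - d1) > threshold
```

If the feet start 1.0 m apart and end 1.2 m apart while both remain in stance, a 0.1 m threshold flags a slip, and the robot would react (step v); a 0.05 m drift would not trigger it.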