Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of a robot effectuates operations including: capturing, with at least one sensor, first data used in indicating a position of the robot; capturing, with at least one sensor, second data indicative of movement of the robot; recognizing, with the processor of the robot, a first area of the workspace based on at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path; and generating, with the processor of the robot, a map of the workspace based on at least one of: the first data and the second data.
G05D 1/02 - Control of position or course in two dimensions
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
A robot configured to perceive a model of an environment, including: a chassis; a set of wheels; a plurality of sensors; a processor; and memory storing instructions that when executed by the processor effectuates operations including: capturing a plurality of data while the robot moves within the environment; perceiving the model of the environment based on at least a portion of the plurality of data, the model being a top view of the environment; storing the model of the environment in a memory accessible to the processor; and transmitting the model of the environment and a status of the robot to an application of a smartphone previously paired with the robot.
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
A method for determining at least one action of a robot, including capturing, with an image sensor disposed on the robot, images of objects within an environment of the robot as the robot moves within the environment; identifying, with a processor of the robot, at least one object based on the captured images; marking, with the processor, a location of the at least one object in a map of the environment; and actuating, with the processor, the robot to execute at least one action based on the at least one object identified.
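As a rough sketch of the identify-then-act loop above (not the patented method), the Python below matches an extracted feature vector against a small object database by nearest-neighbor distance and looks up an action; the class names, feature values, and action policy are all invented for illustration.

```python
import numpy as np

# Hypothetical object database: class label -> reference feature vector.
OBJECT_DB = {
    "sock":      np.array([0.9, 0.1, 0.3]),
    "cable":     np.array([0.2, 0.8, 0.5]),
    "chair_leg": np.array([0.4, 0.4, 0.9]),
}

# Illustrative action policy keyed by identified class.
ACTIONS = {"sock": "avoid", "cable": "avoid", "chair_leg": "slow_down"}

def identify(features: np.ndarray) -> str:
    """Return the database class whose reference vector is nearest (L2 norm)."""
    return min(OBJECT_DB, key=lambda cls: np.linalg.norm(OBJECT_DB[cls] - features))

def act_on(features: np.ndarray) -> str:
    """Map an observed feature vector to the action for its identified class."""
    return ACTIONS.get(identify(features), "continue")

print(act_on(np.array([0.85, 0.15, 0.25])))  # -> "avoid" (nearest class: sock)
```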
Some aspects include a schedule development method for a robotic floor-cleaning device that recognizes patterns in user input to automatically devise a work schedule.
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a robot including a chassis; a set of wheels coupled to the chassis; a plurality of sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations. The operations include capturing, with an image sensor disposed on the robot, a plurality of images of an environment of the robot as the robot navigates within the environment; identifying, with the processor, an obstacle type of an obstacle captured in an image based on a comparison between features of the obstacle and features of obstacles with different obstacle types stored in a database; and determining, with the processor, an action of the robot based on the obstacle type of the obstacle.
A method for pairing a robotic device with an application of a communication device, including using, with the application, unique login information to log into the application; receiving, with the application, a SSID of a first Wi-Fi network to which the communication device is connected and a password for the first Wi-Fi network; entering, with the robotic device, a pairing mode upon the user pressing a button on a user interface of the robotic device or autonomously upon powering up for a first time; transmitting, with the application, the SSID and the password of the first Wi-Fi network to the robotic device; connecting, with the robotic device, the robotic device to the first Wi-Fi network using the SSID and the password of the first Wi-Fi network; receiving, with the application, information; and transmitting, with the application, at least some of the information to the robotic device.
H04W 76/11 - Allocation or use of connection identifiers
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
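A minimal sketch of the pairing flow described above, assuming a robot that only accepts credentials while in pairing mode; the class and method names are hypothetical stand-ins and the actual Wi-Fi join is elided.

```python
from dataclasses import dataclass

@dataclass
class Credentials:
    ssid: str       # SSID of the first Wi-Fi network
    password: str   # password for the first Wi-Fi network

class Robot:
    def __init__(self):
        self.pairing_mode = False
        self.network = None

    def enter_pairing_mode(self):
        """Entered on a button press or autonomously on first power-up."""
        self.pairing_mode = True

    def receive_credentials(self, creds: Credentials):
        """Accept the transmitted SSID/password (joining the network is elided)."""
        if not self.pairing_mode:
            raise RuntimeError("robot must be in pairing mode to accept credentials")
        self.network = creds.ssid
        self.pairing_mode = False

# The application, already logged in and holding the Wi-Fi credentials,
# transmits them to the robot once the robot is in pairing mode.
robot = Robot()
robot.enter_pairing_mode()
robot.receive_credentials(Credentials("HomeWiFi", "hunter2"))
assert robot.network == "HomeWiFi"
```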
Provided is a system for robotic collaboration. A first robotic device includes a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of the first robotic device effectuates first operations including: receiving first information from a processor of a second robotic device; actuating the first robotic device to execute a first action based on the first information; and transmitting second information to the processor of the second robotic device. The second robotic device includes a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor of the second robotic device effectuates second operations including: receiving the second information transmitted from the processor of the first robotic device; actuating the second robotic device to execute a second action based on the second information; and transmitting third information to the processor of the first robotic device.
Provided is a navigation system for a leader vehicle leading follower vehicles, including: the leader vehicle, configured to transmit real-time movement data to the follower vehicles; and the follower vehicles, each comprising: a signal receiver for receiving the data from the leader vehicle; sensors configured to detect at least one maneuverability condition; a memory; a vehicle maneuver controller; a distance sensor; and a processor configured to: determine a route for navigating the respective follower vehicle from an initial location; determine a preferred range of distances from the vehicle in front of the respective follower vehicle that the respective follower vehicle should stay within; determine a set of active maneuvering instructions for the respective follower vehicle based on at least a portion of the data received from the leader vehicle; determine a lag in control commands; and execute the set of active maneuvering instructions in the respective follower vehicle.
G08G 1/00 - Traffic control systems for road vehicles
G05D 1/02 - Control of position or course in two dimensions
B60W 30/165 - Control of distance between vehicles, e.g. keeping a distance to preceding vehicle automatically following the path of a preceding lead vehicle, e.g. "electronic tow-bar"
B60W 10/04 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
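One plausible reading of the follower logic above is a proportional controller on the gap to the vehicle ahead, with the leader's transmitted speed projected forward by the measured command lag; the gain and constants below are assumptions, not the patented controller.

```python
def follower_speed(gap_m, gap_min_m, gap_max_m,
                   leader_speed, leader_accel, lag_s, kp=0.5):
    """Speed command for a follower vehicle.

    gap_m:          measured distance to the vehicle directly ahead
    gap_min/max_m:  preferred range of distances the follower should stay within
    leader_*:       movement data received from the leader vehicle
    lag_s:          estimated lag in control commands
    """
    # Compensate command lag by projecting the leader's speed forward in time.
    projected_leader_speed = leader_speed + leader_accel * lag_s
    # Regulate toward the middle of the preferred distance range.
    target_gap = 0.5 * (gap_min_m + gap_max_m)
    return projected_leader_speed + kp * (gap_m - target_gap)

# Example: 12 m gap, preferred 8-12 m, leader at 15 m/s accelerating, 0.2 s lag.
print(follower_speed(12.0, 8.0, 12.0, 15.0, 0.5, 0.2))  # slightly above 15 m/s
```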
9. Artificial neural network based controlling of window shading system and method
Provided is a window shading system including a means for shading one or more windows; a means for manually controlling at least one window shading setting; one or more sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including capturing, with the one or more sensors, environmental data of surroundings; predicting, with the processor, the at least one window shading setting using a learned function of an artificial neural network that relates the environmental data to the at least one window shading setting; and, applying, with the processor, the at least one window shading setting predicted to the window shading system.
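A toy version of the learned function described above: a two-layer network trained on logged (environmental data, manual shading setting) pairs, then used to predict a setting from fresh sensor readings. The architecture, synthetic training data, and normalization are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log of environmental data -> manually chosen shade setting.
# Columns: [light level, temperature, sun angle], all normalized to [0, 1].
X = rng.uniform(0.0, 1.0, (200, 3))
y = (0.7 * X[:, 0] + 0.3 * X[:, 2]).reshape(-1, 1)  # invented user preference

# Two-layer network trained with plain gradient descent on squared error.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    g = 2.0 * ((h @ W2 + b2) - y) / len(X)   # dLoss/dPrediction
    gh = (g @ W2.T) * (1.0 - h**2)      # backprop through tanh
    W2 -= 0.1 * h.T @ g;  b2 -= 0.1 * g.sum(0)
    W1 -= 0.1 * X.T @ gh; b1 -= 0.1 * gh.sum(0)

def predict_setting(env: np.ndarray) -> float:
    """Predicted shade setting in [0 = open, 1 = closed] for current sensor data."""
    return (np.tanh(env @ W1 + b1) @ W2 + b2).item()

print(predict_setting(np.array([0.9, 0.5, 0.8])))  # bright, low sun -> mostly closed
```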
Provided is a process that includes: obtaining a first version of a map of a workspace; selecting a first undiscovered area of the workspace; in response to selecting the first undiscovered area, causing the robot to move to a position and orientation to sense data in at least part of the first undiscovered area; and obtaining an updated version of the map mapping a larger area of the workspace than the first version.
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G05D 1/02 - Control of position or course in two dimensions
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01C 21/20 - Instruments for performing navigational calculations
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
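A compact sketch of selecting an undiscovered area, assuming a standard frontier heuristic: any free cell adjacent to unknown space is a candidate, and the robot is sent to the nearest one. The grid encoding and Manhattan metric are assumptions.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # assumed occupancy-grid encoding

def nearest_frontier(grid: np.ndarray, robot_rc: tuple):
    """Return the closest free cell bordering unknown space, or None if mapped."""
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            if any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr, c + dc] == UNKNOWN
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                frontiers.append((r, c))
    if not frontiers:
        return None  # no undiscovered area remains; the map is complete
    return min(frontiers,
               key=lambda f: abs(f[0] - robot_rc[0]) + abs(f[1] - robot_rc[1]))
```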
Provided is a robot including: a chassis; wheels; electric motors; a network card; sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with at least one exteroceptive sensor, a first image and a second image; determining, with the processor, an overlapping area of the first image and the second image by comparing the raw pixel intensity values of the first image to the raw pixel intensity values of the second image; combining, with the processor, the first image and the second image at the overlapping area to generate a digital spatial representation of the environment; and estimating, with the processor using a statistical ensemble of simulated positions of the robot, a corrected position of the robot to replace a last known position of the robot within the digital spatial representation of the environment.
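The overlap step above can be illustrated with a one-dimensional search that slides the second image across the first and keeps the column overlap with the lowest mean squared difference in raw pixel intensities; equal image heights and a purely horizontal shift are simplifying assumptions.

```python
import numpy as np

def find_overlap_cols(img1: np.ndarray, img2: np.ndarray, min_overlap: int = 10) -> int:
    """Number of overlapping columns minimizing raw pixel intensity difference.
    Assumes equal heights and a purely horizontal displacement between frames."""
    _, w = img1.shape
    best_k, best_err = min_overlap, np.inf
    for k in range(min_overlap, w):  # k = candidate overlap width in columns
        err = np.mean((img1[:, w - k:].astype(float) - img2[:, :k].astype(float)) ** 2)
        if err < best_err:
            best_k, best_err = k, err
    return best_k

def stitch(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Combine the two frames at the detected overlap, averaging shared pixels."""
    k = find_overlap_cols(img1, img2)
    shared = (img1[:, -k:].astype(float) + img2[:, :k].astype(float)) / 2.0
    return np.hstack([img1[:, :-k], shared.astype(img1.dtype), img2[:, k:]])
```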
Some aspects include a schedule development method for a robotic floor-cleaning device that recognizes patterns in user input to automatically devise a work schedule.
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
13. Method for autonomously controlling speed of components and functions of a robot
Provided is a robot including main and peripheral brushes; a first actuator; a first sensor; one or more processors; and memory storing instructions that when executed by the one or more processors effectuate operations including: determining a first location of the robot in a working environment; obtaining, with the first sensor or another sensor, first data indicative of an environmental characteristic of the first location; adjusting a first operational parameter of the first actuator based on the sensed first data to cause the first operational parameter to be in a first adjusted state while the robot is at the first location; and forming or updating a debris map of the working environment based on data output by the first sensor or the another sensor configured to collect data indicative of an existence of debris on a floor of the working environment over at least one cleaning session.
G05D 1/02 - Control of position or course in two dimensions
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 9/04 - Nozzles with driven brushes or agitators
B25J 11/00 - Manipulators not otherwise provided for
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
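A minimal sketch of the adjust-and-map behavior, assuming a normalized debris sensor: brush speed scales with the sensed debris level, and the reading is folded into a per-cell debris map kept across cleaning sessions. The RPM gain and averaging factor are invented parameters.

```python
def adjust_and_map(debris_level: float, cell: tuple, debris_map: dict,
                   base_rpm: float = 1200.0, gain: float = 400.0,
                   alpha: float = 0.2) -> float:
    """Return an adjusted brush speed and update the debris map in place.

    debris_level: normalized sensor reading in [0, 1] at the current cell
    debris_map:   {(row, col): long-run debris estimate} kept across sessions
    """
    prior = debris_map.get(cell, 0.0)
    debris_map[cell] = (1 - alpha) * prior + alpha * debris_level  # running average
    return base_rpm + gain * debris_level  # dirtier floor -> faster brush

debris_map = {}
rpm = adjust_and_map(0.8, (4, 7), debris_map)
print(rpm, debris_map)  # 1520.0 {(4, 7): 0.16000000000000003}
```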
Provided is a process executed by a robot, including: traversing, to a first position, a first distance in a backward direction; after traversing the first distance, rotating in a first rotation; after the first rotation, traversing, to a second position, a second distance in a third direction; after traversing the second distance, rotating 180 degrees in a second rotation such that the field of view of the sensor points in a fourth direction; after the second rotation, traversing, to a third position, a third distance in the fourth direction; after traversing the third distance, rotating 180 degrees in a third rotation such that the field of view of the sensor points in the third direction; and after the third rotation, traversing, to a fourth position, a fourth distance in the third direction.
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/55 - Depth or shape recovery from multiple images
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
15. Method for automatically removing obstructions from robotic floor-cleaning devices
Some embodiments include a robot, including: a plurality of sensors; at least one encoder; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: measuring, with the at least one encoder, wheel rotation of at least one wheel; capturing, with an image sensor, images of an environment as the robot moves within the environment; identifying, with the processor, at least one characteristic of at least one object captured in the images of the environment; determining, with the processor, an object type of the at least one object based on characteristics of different types of objects stored in an object database; and instructing, with the processor, the robot to execute at least one action based on at least one of: the object type of the at least one object and the measured wheel rotation of the at least one wheel.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
G06K 9/62 - Methods or arrangements for recognition using electronic means
G05D 1/02 - Control of position or course in two dimensions
G06V 10/40 - Extraction of image or video features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
16. Method and apparatus for combining data to construct a floor plan
A robot adapted to capture a plurality of data; perceive a model of the environment based on the plurality of data; determine areas within which work was performed and areas within which work is yet to be performed; store the model of the environment in a memory accessible to a processor of the robot; and transmit the model of the environment and a status of the robot to an application of a smartphone previously paired with the robot.
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
A medium storing instructions that when executed by a processor of a robot effectuates operations including: detecting an object in a line of sight of at least one sensor; adjusting a current path of the robot to include a detour path around the object; instructing the robot to resume along the current path after avoiding the object; discounting areas of overlap from a total area covered based on at least some data collected by sensors; inferring previously visited areas and unvisited areas; generating a planar representation of a workspace of the robot by stitching data collected by at least some sensors of the robot at overlapping points; and transmitting the planar representation and coverage statistics to an application of a communication device configured to display the information.
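Discounting areas of overlap from the total area covered reduces, on a grid, to counting unique visited cells; a sketch under that assumption, with an invented cell size.

```python
def coverage_stats(visited_cells: list, cell_area_m2: float = 0.01) -> dict:
    """Coverage statistics with areas of overlap discounted from the total.
    visited_cells: every grid cell entered, in order, repeats included."""
    unique = set(visited_cells)
    return {
        "covered_m2": len(unique) * cell_area_m2,  # overlap discounted
        "overlap_m2": (len(visited_cells) - len(unique)) * cell_area_m2,
    }

print(coverage_stats([(0, 0), (0, 1), (0, 1), (1, 1)]))
# {'covered_m2': 0.03, 'overlap_m2': 0.01}
```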
A method for a robot to autonomously plan a navigational route and work duties in an environment of the robot including accessing historical sensor data stored from prior work cycles, determining the navigational route and work duties of the robot by processing probabilities based on the historical sensor data, enacting the navigational route and work duties by the robot, capturing new sensor data while the robot enacts the navigational route and work duties, processing the new sensor data, and altering the navigational route and work duties based on the new sensor data processed.
A mop module of a robot, including: a liquid reservoir for storing liquid; and an electronically-controlled liquid release mechanism; wherein: the electronically-controlled liquid release mechanism releases liquid from the liquid reservoir for mopping a work surface; operation and a schedule of operation of the electronically-controlled liquid release mechanism in at least one area are controlled by a processor of the robot within which the mop module is installed or based on input provided to an application of a communication device paired with the robot; a liquid flow rate depends on at least an amount of power delivered to the electronically-controlled liquid release mechanism; and the liquid flow rate for the at least one area is determined by the processor of the robot within which the mop module is installed or an input provided to the application of the communication device paired with the robot.
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A47L 13/58 - Wringers for scouring pads, mops, or the like, combined with buckets
Provided is a robotic device, including: a chassis; a set of wheels; one or more motors to drive the set of wheels; a suspension system; a controller in communication with the one or more motors; at least one sensor; a camera; one or more processors; a tangible, non-transitory, machine readable medium storing instructions that when executed by the one or more processors effectuate operations including: capturing, with the camera, spatial data of surroundings; generating, with the one or more processors, a spatial model of the surroundings based on the spatial data; generating, with the one or more processors, a movement path based on the spatial model of the surroundings; inferring, with the one or more processors, a location of the robotic device; and updating, with the one or more processors, the movement path to exclude locations of the movement path that the robotic device has previously been located.
A method for identifying objects for autonomous robots, including: capturing, with an image sensor disposed on an autonomous robot, images of a workspace, wherein a field of view of the image sensor captures at least an area in front of the autonomous robot; obtaining, with a processing unit disposed on the autonomous robot, the images; generating, with the processing unit, a feature vector from the images; comparing, with the processing unit, at least one object captured in the images to objects in an object dictionary; identifying, with the processing unit, a class to which the at least one object belongs; and executing, with the autonomous robot, instructions based on the class of the at least one object identified.
Provided is a robotic device, including: a chassis; a set of wheels; a control system; a battery; one or more sensors; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with the one or more sensors, data of an environment of the robotic device and data indicative of movement of the robotic device; generating or updating, with the processor, a map of the environment based on at least a portion of the captured data; inferring, with the processor, a current location of the robotic device; and generating or updating, with the processor, a movement path of the robotic device based on at least the map of the environment, at least a portion of the captured data, and the inferred current location of the robotic device.
Some aspects include a schedule development method for a robotic floor-cleaning device that recognizes patterns in user input to automatically devise a work schedule.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided are operations including: receiving, with one or more processors of a robot, an image of an environment from an imaging device separate from the robot; obtaining, with the one or more processors, raw pixel intensity values of the image; extracting, with the one or more processors, objects and features in the image by grouping pixels with similar raw pixel intensity values, and by identifying areas in the image with greatest change in raw pixel intensity values; determining, with the one or more processors, an area within a map of the environment corresponding with the image by comparing the objects and features of the image with objects and features of the map; and, inferring, with the one or more processors, one or more locations captured in the image based on the location of the area of the map corresponding with the image.
Provided is an integrated bumper system of a robot, including: a bumper elastically coupled with a chassis of the robot and comprising an opening in a top surface of the bumper; at least one elastic element coupled to the chassis and interfacing with the bumper; and a bridging element coupled with the chassis or another component of the robot, wherein: the bumper slides relative to the bridging element upon an impact and a release of the impact; a perimeter of the opening of the bumper is hidden beneath the bridging element when the bumper is in a neutral position and an impacted position; and the perimeter of the opening of the bumper remains hidden beneath the bridging element during movement of the bumper caused by the impact and the release of the impact.
B62D 24/04 - Vehicle body mounted on resilient suspension for movement relative to the vehicle frame
A47L 9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
B62D 27/04 - Connections between superstructure sub-units resilient
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G05D 1/02 - Control of position or course in two dimensions
26. Method for constructing a map while performing work
Provided is a method including: capturing first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/55 - Depth or shape recovery from multiple images
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
Provided is a robot, including: a chassis; a set of wheels coupled to the chassis; a processor; and a tangible, non-transitory, machine-readable medium storing instructions that when executed by the processor effectuate operations including: capturing, by an image sensor disposed on the robot, images of a workspace; obtaining, by the processor of the robot or via the cloud, the captured images; comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary; identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit; and instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified.
A system for robotic collaboration, including: a first robotic chassis and a second robotic chassis, each including wheels; a control system; a power supply; at least one sensor; a processor; and a medium storing instructions that when executed by the respective processor effectuates operations including: capturing data of an environment and data indicative of movement; generating a map of the environment based on at least some of the captured data; inferring a current location of the respective robotic chassis based on at least some of the captured data; and executing a portion of a task, the second robotic chassis executing a second part of the task after the first robotic chassis completes a first part of the task.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
29. Method and system for collaborative construction of a map
Methods and systems for collaboratively constructing a map of an environment. One or more sensory devices installed on an autonomous vehicle take readings within a field of view of the sensory device. As the vehicle moves within the environment, the sensory device continuously takes readings within new fields of view. At the same time, sensory devices installed on other autonomous vehicles operating within the same environment and/or fixed devices monitoring the environment take readings within their respective fields of view. The readings recorded by a processor of each autonomous vehicle may be shared with all other processors of autonomous vehicles operating within the same environment with whom a data transfer channel is established. Processors combine overlapping readings to construct continuously growing segments of the map. Combined readings are taken by the same sensory device or by different sensory devices and are taken at the same time or at different times.
G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
G01S 1/00 - Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
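Assuming the shared readings have already been registered into a common frame (the alignment itself is the hard part and is elided here), combining overlapping map segments can be sketched as a cell-wise fusion:

```python
import numpy as np

UNKNOWN = -1.0  # assumed marker for cells no vehicle has observed

def merge_maps(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cell-wise fusion of two aligned occupancy grids from different vehicles.
    A known reading fills in UNKNOWN; overlapping known readings are averaged."""
    merged = np.where(a == UNKNOWN, b, a).astype(float)
    both_known = (a != UNKNOWN) & (b != UNKNOWN)
    merged[both_known] = (a[both_known] + b[both_known]) / 2.0
    return merged
```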
30. Method of lightweight simultaneous localization and mapping performed on a real-time computing and battery operated wheeled device
Some aspects include a method for operating a wheeled device, including: capturing, by a primary sensor coupled to the wheeled device, primary sensor data indicative of a plurality of radial distances to objects; transforming, by a processor of the wheeled device, the plurality of radial distances from a perspective of the primary sensor to a perspective of the wheeled device; generating, by the processor, a partial map of visible areas in real-time at a first position of the wheeled device based on the primary sensor data and some secondary sensor data, wherein: the partial map is a bird's eye view; and the processor iteratively completes a full map of the environment based on new sensor data captured by sensors as the wheeled device performs work within the environment and new areas become visible to the sensors; and executing, by the wheeled device, a movement path to a second position.
G05D 1/02 - Control of position or course in two dimensions
G01C 21/12 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning
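The perspective transform in the method above amounts to converting each radial distance from the sensor's polar frame into Cartesian points in the device's body frame; the mounting offset and yaw below are assumed values.

```python
import math

def to_device_frame(radial_m, angles_rad,
                    sensor_offset=(0.10, 0.0), sensor_yaw=0.0):
    """Radial distances in the sensor's polar frame -> (x, y) points in the
    wheeled device's body frame. Offset and yaw are assumed mounting values."""
    ox, oy = sensor_offset
    return [(r * math.cos(a + sensor_yaw) + ox,
             r * math.sin(a + sensor_yaw) + oy)
            for r, a in zip(radial_m, angles_rad)]

# Example: a reading 2 m straight ahead of a sensor mounted 10 cm forward.
print(to_device_frame([2.0], [0.0]))  # [(2.1, 0.0)]
```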
31. Surface coverage optimization method for mobile robotic devices
A method for covering a surface by a robotic device including: generating a two-dimensional map of a workspace using data from at least a depth measurement device positioned on the robotic device, dividing the two-dimensional map into a grid of cells, identifying the cells as free, occupied, or unknown, localizing the robotic device within the two-dimensional map, identifying at least one frontier within the map for exploration, generating a spanning tree such that a movement path of the robotic device includes a repetition of movement in a first direction along a straight line, 180 degree rotation over a distance perpendicular to the first direction, movement in a second direction opposite the first direction along a straight line, and 180 degree rotation over a distance perpendicular to the second direction, and recording the number of collisions incurred and the areas covered by the robotic device while executing the movement path.
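The repetition of straight runs joined by 180-degree rotations is a boustrophedon ordering of the free cells; a grid-level sketch, assuming the free cells are already known from the two-dimensional map.

```python
def boustrophedon_order(free_cells):
    """Order free (row, col) cells into a back-and-forth coverage path:
    sweep each row in turn, reversing direction on alternate rows."""
    rows = {}
    for r, c in free_cells:
        rows.setdefault(r, []).append(c)
    path = []
    for i, r in enumerate(sorted(rows)):
        cols = sorted(rows[r], reverse=(i % 2 == 1))  # flip direction each row
        path.extend((r, c) for c in cols)
    return path

print(boustrophedon_order([(0, 0), (0, 1), (1, 0), (1, 1)]))
# [(0, 0), (0, 1), (1, 1), (1, 0)]
```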
Provided is a first robot including: a machine readable medium storing instructions that when executed by the processor of the first robot effectuates operations including: executing, with the processor of the first robot, a task; and transmitting, with the processor of the first robot, a signal to a processor of a second robot during execution of the task when its power supply level drops below a predetermined threshold; and the second robot including: a machine readable medium storing instructions that when executed by the processor of the second robot effectuates operations including: executing, with the processor of the second robot, the remainder of the task upon receiving the signal transmitted from the processor of the first robot; and wherein the first robot navigates to a charging station when its power supply level drops below the predetermined threshold and wherein the first robot and second robot provide the same services.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
G05D 1/02 - Control of position or course in two dimensions
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
33. Method and apparatus for combining data to construct a floor plan
A robot for perceiving a spatial representation of an environment, including: an actuator, at least one sensor, a processor, and memory storing instructions that when executed by the processor effectuates operations including: capturing a plurality of data by the at least one sensor of the robot, wherein: the plurality of data comprises first data comprising pixel characteristics indicative of features of the environment and second data indicative of depth to objects in the environment; the plurality of data is captured from different positions within the environment through which the robot moves, the plurality of data corresponding with respective positions from which the plurality of data was captured; and the plurality of data captured from different respective positions within the environment corresponds to respective fields of view; and aligning the plurality of data as it is captured to more accurately perceive the spatial representation of the environment.
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
Methods for utilizing virtual boundaries with robotic devices are presented, including: positioning a boundary component having a receiver pair arranged such that each receiver of the pair receives a first robotic device signal substantially simultaneously only when the robotic device is positioned along a virtual boundary; operating the robotic device to move automatically within an area co-located with the virtual boundary; transmitting the first robotic device signal by the robotic device; and receiving the first robotic device signal by the receiver pair, thereby indicating that the robotic device is positioned along the virtual boundary.
Provided is a wheeled device, including: a chassis; a set of wheels coupled to the chassis; one or more electric motors to rotate the set of wheels; a network card for wireless connection to the internet; a plurality of sensors; a processor electronically coupled to the plurality of sensors; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with at least one exteroceptive sensor, measurement readings of the environment; and estimating, with the processor using a statistical ensemble of simulated positions of the wheeled device and the measurement readings, a corrected position of the wheeled device to replace a last known position of the wheeled device.
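The "statistical ensemble of simulated positions" reads like a particle filter; below is a bare-bones Monte Carlo localization sketch with assumed Gaussian motion noise and a single range-to-landmark measurement model, not the patented estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal([0.0, 0.0], 0.1, (N, 2))  # ensemble around last known position
weights = np.full(N, 1.0 / N)

def motion_update(dx, dy, noise=0.02):
    """Propagate every simulated position by the odometry step plus noise."""
    particles[:] += np.array([dx, dy]) + rng.normal(0.0, noise, (N, 2))

def measurement_update(measured_range, landmark, sigma=0.1):
    """Reweight simulated positions by agreement with an exteroceptive reading."""
    expected = np.linalg.norm(particles - landmark, axis=1)
    weights[:] *= np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    weights[:] /= weights.sum()

def corrected_position():
    """Resample and return the ensemble mean as the corrected position."""
    idx = rng.choice(N, size=N, p=weights)
    particles[:] = particles[idx]
    weights[:] = 1.0 / N
    return particles.mean(axis=0)

motion_update(0.5, 0.0)
measurement_update(1.2, landmark=np.array([1.5, 0.0]))
print(corrected_position())  # replaces the last known position
```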
Provided is a robot, including: a chassis; a set of wheels coupled to the chassis; at least one motor; at least one motor controller; a range finding system; a plurality of sensors; a processor; and a medium storing instructions that when executed by the processor effectuates operations including: measuring, with the range finding system, distances to surfaces opposite the range finding system as the robot moves relative to the surfaces; monitoring, with the processor, the distance measurements taken by the range finding system; discarding, with the processor, outlier distance measurements that reflect an interruption in otherwise steadily fitting distance measurements taken as the robot moves towards and away from surrounding obstacles; and determining, with the processor, a position of an obstacle by identifying a position of the range finding system immediately before and after encountering the obstacle, signified by the interruption detected in the distance measurements.
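A sketch of the discard-then-detect logic above: isolated single-sample outliers are smoothed away first, and any remaining jump between consecutive readings is treated as the interruption marking an obstacle edge; the jump threshold is an assumption.

```python
def obstacle_edges(ranges, jump_m=0.5):
    """Indices where the range stream is genuinely interrupted by an obstacle.
    Isolated outliers (samples far from BOTH neighbors) are discarded first."""
    cleaned = list(ranges)
    for i in range(1, len(ranges) - 1):
        if (abs(ranges[i] - ranges[i - 1]) > jump_m and
                abs(ranges[i] - ranges[i + 1]) > jump_m):
            cleaned[i] = (ranges[i - 1] + ranges[i + 1]) / 2.0  # drop the outlier
    return [i for i in range(1, len(cleaned))
            if abs(cleaned[i] - cleaned[i - 1]) > jump_m]

# One spurious spike (discarded) and one real step change (reported).
print(obstacle_edges([2.0, 2.0, 9.9, 2.0, 2.0, 0.8, 0.8]))  # -> [5]
```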
Provided is a robotic device, including: a chassis; a set of wheels; one or more motors to drive the set of wheels; a controller in communication with the one or more motors; one or more surface cleaning tools; at least one sensor; a camera; one or more processors; a medium storing instructions that when executed by the one or more processors effectuate operations including: capturing, with the camera of the robotic device, spatial data of surroundings of the robotic device; generating, with the one or more processors of the robotic device, a movement path based on the spatial data of the surroundings; inferring, with the one or more processors of the robotic device, a location of the robotic device; and updating, with the one or more processors of the robotic device, the movement path to exclude locations of the movement path that the robotic device has previously been located.
Provided is a machine-readable medium storing instructions that when executed by a processor effectuate operations including: receiving, with an application executed by a communication device, a first set of inputs including user data; generating, with the application, a three-dimensional model of the user based on the user data; receiving, with the application, a second set of inputs including a type of clothing garment; generating, with the application, a first set of clothing garments including clothing garments from a database of clothing garments that are the same type of clothing garment; generating, with the application, a second set of clothing garments from the first set of clothing garments based on the user data and one or more relationships between clothing attributes and human attributes; and presenting, with the application, the clothing garments from the second set of clothing garments virtually fitted on the three-dimensional model of the user.
Provided is a method including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
Provided is a method for operating a robot, including: capturing images of a workspace; capturing movement data indicative of movement of the robot; capturing LIDAR data as the robot performs work within the workspace; comparing at least one object from the captured images to objects in an object dictionary; identifying a class to which the at least one object belongs; generating a first iteration of a map of the workspace based on the LIDAR data; generating additional iterations of the map based on newly captured LIDAR data and newly captured movement data; actuating the robot to drive along a trajectory that follows along a planned path by providing pulses to one or more electric motors of wheels of the robot; and localizing the robot within an iteration of the map by estimating a position of the robot based on the movement data, slippage, and sensor errors.
Provided is a robotic device including a medium storing instructions that when executed by one or more processors effectuate operations including: capturing, with a camera, spatial data of surroundings; generating, with the one or more processors, a movement path based on the spatial data; predicting, with the one or more processors, a new predicted state of the robotic device including at least a predicted position of the robotic device, wherein predicting the new predicted state includes: capturing, with at least one sensor, movement readings of the robotic device; predicting, with the one or more processors, the new predicted state using a motion model of the robotic device based on a previous predicted state of the robotic device and the movement readings; and updating, with the one or more processors, the movement path to exclude locations of the movement path that the robotic device has previously been predicted to be positioned.
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
42. Method for tracking movement of a mobile robotic device
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of a robotic device effectuates operations including: capturing visual readings of objects within an environment; capturing readings of wheel rotation; capturing readings of a driving surface; capturing distances to obstacles; determining displacement of the robotic device in two dimensions based on sensor readings of the driving surface; estimating, with the processor, a corrected position of the robotic device to replace a last known position of the robotic device; determining a most feasible element in an ensemble based on the visual readings; and determining a most feasible position of the robotic device as the corrected position based on the most feasible element in the ensemble and the visual readings.
Methods for detecting an alignment of a robot with a virtual line, including: transmitting, with at least one transmitter of the robot, a first signal; receiving, with a first receiver and a second receiver of a device, the first signal; detecting, with the device, that the robot is aligned with the virtual line when the first receiver and the second receiver of the device simultaneously receive the first signal; transmitting, with at least one transmitter of the device, a second signal indicating that the robot is aligned with the virtual line; receiving, with at least one receiver of the robot, the second signal; actuating, with a processor of the robot, the robot to execute a movement upon receiving the second signal; and marking, with the processor of the robot, the location of the device in a map of an environment of the robot.
An autonomous mobile robotic device that may carry and transport one or more items within an environment. The robotic device may comprise a platform on which the one or more items may be placed. The robotic device may pick up, deliver, distribute, and/or transport the one or more items to one or more locations. The robotic device may be provided with scheduling information for task execution or for pick up, delivery, distribution, and/or transportation of one or more items. Once tasks are complete, the robotic device may autonomously navigate to a storage location.
Provided is a method including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
A47L 11/40 - Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor effectuates operations including: capturing, with at least one exteroceptive sensor, readings of an environment and capturing, with at least one proprioceptive sensor, readings indicative of displacement of a wheeled device; estimating, with the processor using an ensemble of simulated positions of possible new locations of the wheeled device, the readings of the environment, and the readings indicative of displacement, a corrected position of the wheeled device to replace a last known position of the wheeled device; determining, by the processor using the readings of the exteroceptive sensor, a most feasible position of the wheeled device as the corrected position; and, transmitting, by the processor, status information of tasks performed by the wheeled device to an external processor, wherein the status information initiates a second wheeled device to perform a second task.
Systems and methods for sending scheduling information to a mobile robotic device from an application of a communication device. The application of the communication device generates at least one scheduling command and transmits the at least one scheduling command to a router using a first wireless communication channel. The router is configured to transmit and receive the at least one scheduling command to and from at least one cloud service. A charging station of the robotic device receives the at least one scheduling command from the router using the first wireless communication channel and stores the at least one scheduling command on the charging station. The charging station transmits the at least one scheduling command to a processor of the robotic device using a second wireless communication channel and the processor of the robotic device modifies its scheduling information based on the at least one scheduling command.
A retractable cable assembly for use with an electrical charger, power adapter, or other power supply. A cable wound on a spool disposed within a housing may be extracted by manually pulling on the cable or by pressing a release switch until the desired length of cable is drawn. As the cable is drawn, an engaged locking mechanism keeps the cable in place during and after extraction until retraction is desired. Rotating or twisting at least a portion of the housing disengages the locking mechanism, freeing the cable and immediately retracting it within the housing.
Provided is a robotic towing device including: a mobile robotic chassis; a set of wheels coupled to the mobile robotic chassis; one or more motors; one or more processors; a casing coupled to the mobile robotic chassis; one or more arms coupled to the mobile robotic chassis on a first end; one or more lifts, each corresponding to and coupled to one of the one or more arms on a second end; and one or more lift wheels, each corresponding to and coupled to a terminal end of one of the one or more lifts.
B60P 3/07 - Vehicles adapted to transport, to carry or to comprise special loads or objects for carrying vehicles for carrying road vehicles
B62D 61/10 - Motor vehicles or trailers, characterised by the arrangement or number of wheels, not otherwise provided for, e.g. four wheels in diamond pattern with more than four wheels
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
50. Method and apparatus for overexposing images captured by drones
Provided is a method for overexposing images captured by a camera of a camera carrying device, including: providing a camera disabling apparatus within an environment, including: a housing; a camera disposed within the housing; a movable high power light source; a motor coupled to the high power light source; and a processor for detecting the camera carrying device in captured images of the environment; capturing, with the camera, an image of the environment; detecting, with the processor, the camera carrying device in the captured image; activating, with the processor, a light beam of the high power light source when the camera carrying device is detected in the captured image; and actuating, with the processor, the motor to direct the light beam of the high power light source towards the camera carrying device such that images captured by the camera of the camera carrying device are overexposed.
Included is a method for autonomous robotic refuse container replacement including: transmitting, by a processor of a first robotic refuse container, a request for replacement to a portion of processors of robotic refuse containers; receiving, by the processor of the first robotic refuse container, a return signal from a portion of processors of the robotic refuse containers; transmitting, by the processor of the first robotic refuse container, a confirmation for replacement to a processor of a second robotic refuse container in response to a return signal received from the processor of the second robotic refuse container; instructing, by the processor of the first robotic refuse container, the first robotic refuse container to navigate to a second location from a current location; and instructing, by the processor of the second robotic refuse container, the second robotic refuse container to navigate to the current location of the first robotic refuse container.
Included is a refuse bag replacement method including: detecting, by one or more sensors of a robotic refuse container, a refuse bag fill level; instructing, by a processor of the robotic refuse container, the robotic refuse container to navigate to a refuse collection site upon detecting a predetermined refuse bag fill level; and instructing, by the processor of the robotic refuse container, the robotic refuse container to discard a refuse bag housed within the robotic refuse container at the refuse collection site.
Provided is an autonomous versatile robotic chassis, including: a chassis; a set of wheels coupled to the chassis; one or more motors to drive the set of wheels; one or more mounting elements; at least one piece of food equipment coupled to the robotic chassis using the one or more mounting elements; a processor; one or more sensors; a camera; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: generating, with the processor, a map of an environment; localizing, with the processor, the robotic chassis; receiving, with the processor, a request for delivery of a food item to a first location; generating, with the processor, a movement path to the first location from a current location; and instructing, with the processor, the robotic chassis to transport the food item to the first location by navigating along the movement path.
G06N 7/00 - Computing arrangements based on specific mathematical models
B60N 3/10 - Arrangements or adaptations of other passenger fittings, not otherwise provided for of receptacles for food or beverages, e.g. refrigerated
54. Efficient coverage planning of mobile robotic devices
Provided is a robot-implemented process to create a coverage plan for a work environment, including: obtaining, with a robotic device, raw data values of the work environment pertaining to the likelihood of operational success and the presence or absence of operational hazards within the work environment; determining, with one or more processors, the most efficient coverage plan for the robotic device based on the raw data values; and enacting the coverage plan.
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
Some aspects include a method including: generating, with a processor of the robot, a map of the workspace; segmenting, with the processor of the robot, the map into a plurality of zones; transmitting, with the processor of the robot, the map to an application of a communication device; receiving, with the application, the map; displaying, with the application, the map; receiving, with the application, at least one input for the map; implementing, with the application, the at least one input into the map to generate an updated map of the workspace; transmitting, with the application, the updated map to the processor of the robot; receiving, with the processor of the robot, the updated map; generating, with the processor of the robot, a movement path based on the map or the updated map; and actuating, with the processor of the robot, the robot to traverse the movement path.
Provided is an autonomous mobile robotic device that may carry, transport, and deliver one or more items in a work environment to predetermined destinations. The robotic device may comprise a container in which the one or more items may be placed. Once tasks are complete, the robotic device may autonomously navigate to a predetermined location.
Included is a method for collaboration between a first robotic chassis and a second robotic chassis, including: executing, with the processor of the first robotic chassis, a first part of a task; transmitting, with the processor of the first robotic chassis, a signal to a processor of the second robotic chassis upon completion of the first part of the task; and executing, with the processor of the second robotic chassis, a second part of the task upon receiving the signal transmitted from the processor of the first robotic chassis; wherein the first robotic chassis and the second robotic chassis provide differing services.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/02 - Control of position or course in two dimensions
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
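The collaboration method above reduces to a completion-signal handoff between two machines offering differing services. A minimal Python sketch, assuming an in-process callback stands in for the wireless signal; ChassisBot and on_signal are invented names, not identifiers from the patent.

```python
class ChassisBot:
    def __init__(self, name, service):
        self.name, self.service = name, service
        self.successor = None              # the peer to signal on completion

    def execute(self, task_part):
        print(f"{self.name} performs {self.service} for {task_part}")
        if self.successor:                 # transmit the completion signal
            self.successor.on_signal(task_part)

    def on_signal(self, finished_part):
        # The second chassis begins its part upon receiving the signal.
        self.execute(f"follow-up to {finished_part!r}")

vacuum = ChassisBot("bot-1", "vacuuming")
mop = ChassisBot("bot-2", "mopping")
vacuum.successor = mop                     # differing services, shared task
vacuum.execute("zone A")                   # bot-2 starts automatically after bot-1
```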
Provided is a robotic device including: a chassis including a set of wheels; one or more motors for driving the set of wheels; a suspension system; a rechargeable battery for providing power to the robotic device; a controller for controlling movement of the robotic device; a processor; a set of sensors; and, a signal boosting device. Further provided is a method for providing a mobile signal boost including: providing a robotic device including: a chassis including a set of wheels; a motor for driving the set of wheels; a suspension system; a rechargeable battery for providing power to the device; a control system module for controlling the movement of the device; a processor; and, a set of sensors; providing a signal boosting device coupled to the robotic device; and, transporting the signal boosting device to one or more locations within an environment of the robotic device by the robotic device.
Included is a method for preventing a mobile robotic device from becoming stuck during a work session including: selecting, by a control system of the mobile robotic device, one or more actions to navigate through a workspace, wherein each action transitions the mobile robotic device from a current state to a next state; actuating, by the control system of the mobile robotic device, the mobile robotic device to execute the selected one or more actions; detecting, by the control system of the mobile robotic device, whether a collision is incurred by the mobile robotic device for each action executed; and, calculating and assigning, by the control system of the mobile robotic device, more than one level of rewards for each action executed based on collisions incurred by the mobile robotic device and completion of the action.
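The stuck-prevention method reads like reward shaping over state transitions: each executed action earns one of several reward levels depending on whether a collision occurred and whether the action completed. A hedged sketch follows, with illustrative reward values; the patent does not specify the numbers.

```python
def assign_reward(collided: bool, completed: bool) -> float:
    """Multi-level reward per executed action (values assumed for illustration)."""
    if collided and not completed:
        return -1.0        # worst case: collision and no progress
    if collided:
        return -0.5        # completed despite a collision
    if completed:
        return +1.0        # clean completion
    return -0.1            # no collision, but action not completed

# Accumulate rewards over a session so low-collision policies are preferred.
history = [(False, True), (True, False), (False, True)]   # (collided, completed)
total = sum(assign_reward(c, d) for c, d in history)
print(total)   # 1.0
```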
Provided is a method including emitting, with a laser light emitter disposed on a robot, a collimated laser beam projecting a light point on a surface opposite the laser light emitter; capturing, with each of at least two image sensors disposed on the robot, images of the projected light point; overlaying, with a processor of the robot, the images captured by the at least two image sensors to produce a superimposed image showing both captured images in a single image; determining, with the processor of the robot, a first distance between the projected light points in the superimposed image; and determining, with the processor, a second distance based on the first distance using a relationship that relates distance between light points with distance between the robot or a sensor thereof and the surface on which the collimated laser beam is projected.
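The distance relationship described above is consistent with standard stereo triangulation: for two image sensors separated by baseline b with focal length f (in pixels), a surface at depth Z projects the light point with pixel disparity d = f·b / Z, so the gap between the two points in the superimposed image shrinks as the surface recedes. The patent states only that some fixed relationship is used; the sketch below assumes this common form, with illustrative numbers.

```python
def depth_from_point_gap(gap_px: float, focal_px: float, baseline_m: float) -> float:
    """Second distance (depth to the surface) from the first distance
    (pixel gap between the two projected light points): Z = f * b / d."""
    return focal_px * baseline_m / gap_px

# Example: 40 px gap, 800 px focal length, 5 cm sensor baseline -> 1.0 m.
print(depth_from_point_gap(gap_px=40.0, focal_px=800.0, baseline_m=0.05))
```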
A removable dustbin for a robotic vacuum that is wholly separable from all electronic parts thereof including a motor unit such that the dustbin, when separated from the electronic parts, may be safely immersed in water for quick and easy cleaning. The dustbin design further facilitates easy access to the motor for convenient servicing and repair.
A47L 9/14 - Bags or the like; Attachment of, or closures for, bags
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a latch mechanism including: a main body including a first enclosure and a second enclosure; a latch including: a latch body positioned within and slidingly coupled with the first enclosure of the main body; a latch lock fixed to the latch body; and a handle fixed to the latch body; and a latch locking mechanism including: a stopper positioned within and slidingly coupled with the second enclosure of the main body; a limit bar fixed to the latch body; a spring holder fixed to the latch body; and a spring positioned on the spring holder.
Provided is a mopping extension attachable to a robotic floor cleaning device including: a fluid reservoir for storing one or more cleaning fluids; a cloth for receiving the one or more cleaning fluids, wherein the cloth is oriented toward the work surface; and one or more dispersed nozzle sets for controlling delivery of the one or more cleaning fluids to the cloth.
A47L 5/00 - Structural features of suction cleaners
A47L 7/00 - Suction cleaners adapted for additional purposes; Tables with suction openings for cleaning purposes; Containers for cleaning articles by suction; Suction cleaners adapted to cleaning of brushes; Suction cleaners adapted to taking-up liquids
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
65.
Method for developing navigation plan in a robotic floor-cleaning device
Included is a method of path planning for a robotic device, including: receiving, by a processor of the robotic device, a sequence of one or more commands; executing, via the robotic device, the sequence of one or more commands; saving the sequence of one or more commands in memory of the robotic device after a predetermined amount of time from receiving a most recent one or more commands; and re-executing the saved sequence of one or more commands.
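The navigation-plan method above is essentially command recording with a quiescence timeout followed by replay. Below is a small Python sketch under that reading; CommandRecorder and the timeout value are assumptions introduced for the example.

```python
import time

class CommandRecorder:
    def __init__(self, quiet_period_s=5.0):
        self.pending = []          # commands received but not yet saved
        self.saved = None          # persisted sequence for re-execution
        self.last_rx = None
        self.quiet_period_s = quiet_period_s

    def receive(self, command):
        self.pending.append(command)
        self.last_rx = time.monotonic()

    def maybe_save(self):
        """Persist the sequence once no new command arrives for a while."""
        if self.pending and self.last_rx is not None and \
           time.monotonic() - self.last_rx >= self.quiet_period_s:
            self.saved = list(self.pending)
            self.pending.clear()

    def replay(self, execute):
        for cmd in self.saved or []:
            execute(cmd)

rec = CommandRecorder(quiet_period_s=0.0)   # zero timeout just for the demo
rec.receive("forward 2m"); rec.receive("turn 90")
rec.maybe_save()
rec.replay(print)   # re-executes: forward 2m, turn 90
```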
A method including detecting an object in a line of sight of at least one sensor; adjusting a current path of the robot to include a detour path around the object; instructing the robot to resume along the current path after avoiding the object; discarding at least some data collected by sensors of the robot in overlapping areas covered; inferring previously visited areas and unvisited areas; generating a planar representation of a workspace of the robot by stitching data collected by at least some sensors of the robot at overlapping points; and presenting at least the planar representation and coverage statistics on an application of a communication device.
Provided is an autonomous hospital bed including: a frame; wheels; motors to drive the wheels; a controller in communication with the motors; sensors; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuate operations including: capturing, with the sensors, depth data indicating distances to objects within an environment of the hospital bed and directions of the distances; capturing, with the sensors, movement data indicating movement distance and direction of the hospital bed; generating, with the processor, a map of the environment using the depth and movement data; generating, with the processor, a movement path to a first location; instructing, with the processor, motor drivers of the wheels to move the hospital bed along the movement path; and, inferring, with the processor, a location of the hospital bed within the environment as the hospital bed navigates along the movement path.
Provided is a robot, including: a plurality of sensors; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with an image sensor, images of a workspace as the robot moves within the workspace; identifying, with the processor, at least one characteristic of at least one object captured in the images of the workspace; determining, with the processor, an object type of the at least one object based on characteristics of different types of objects stored in an object dictionary; and instructing, with the processor, the robot to execute at least one action based on the object type of the at least one object.
Provided is a method for operating a robot, including capturing images of a workspace, comparing at least one object from the captured images to objects in an object dictionary, identifying a class to which the at least one object belongs using an object classification unit, instructing the robot to execute at least one action based on the object class identified, capturing movement data of the robot, and generating a planar representation of the workspace based on the captured images and the movement data, wherein the captured images indicate a position of the robot relative to objects within the workspace and the movement data indicates movement of the robot.
The present disclosure provides a built-in robotic floor cleaning system installed within the infrastructure of a workspace and a method for controlling and integrating such system in a workspace. The built-in robotic floor cleaning system comprises a robotic floor cleaning device and a docking station for charging the robotic floor cleaning device wherein the docking station is built into the infrastructure of the workspace. The system may further comprise a control panel integrated into the infrastructure of the workspace to deliver inputs from users and display outputs from the system. The system may further comprise a variety of types of confinement methods built into the infrastructure of the workspace to aid the robotic floor cleaning device in navigation. The system may also be provided with a virtual map of the environment during an initial set-up phase to assist with navigation.
Included is a method for a mobile automated device to detect and avoid edges including: providing one or more rangefinder sensors on the mobile automated device to calculate, continuously or periodically, distances from the one or more rangefinder sensors to a surface; monitoring, with a processor of the mobile automated device, the distances calculated by each of the one or more rangefinder sensors; and actuating, with the processor of the mobile automated device, the mobile automated device to execute one or more predetermined movement patterns upon the processor detecting a calculated distance greater than a predetermined amount, wherein the one or more movement patterns initiate movement of the mobile automated device away from the area where the increase was detected.
B25J 5/00 - Manipulators mounted on wheels or on carriages
B25J 11/00 - Manipulators not otherwise provided for
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 9/04 - Nozzles with driven brushes or agitators
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
G01S 15/931 - Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
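For the edge-avoidance method above, the core test is that a floor-facing rangefinder reading exceeding its nominal floor distance by more than a set margin signals a drop-off, triggering a retreat pattern. The sketch below encodes that test; the millimetre values are illustrative assumptions.

```python
NOMINAL_FLOOR_MM = 30    # expected sensor-to-floor distance (assumed)
EDGE_DELTA_MM = 25       # increase that counts as a drop-off (assumed)

def check_for_edge(readings_mm):
    """Return indices of rangefinder sensors that see a drop-off."""
    return [i for i, d in enumerate(readings_mm)
            if d > NOMINAL_FLOOR_MM + EDGE_DELTA_MM]

def react(readings_mm):
    edges = check_for_edge(readings_mm)
    if edges:
        # Execute the predetermined pattern: move away from the edge.
        return f"reverse away from sensor(s) {edges}"
    return "continue"

print(react([31, 29, 120]))   # sensor 2 sees a cliff -> reverse
```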
72.
Methods for finding the perimeter of a place using observed coordinates
Provided is a method for navigating and mapping a workspace, including: obtaining a stream of spatial data indicative of a robot's position in a workspace, the stream of spatial data being based on at least output of a first sensor; obtaining a stream of movement data indicative of the robot's displacement in the workspace, the stream of movement data being based on at least output of a second sensor of different type than the first sensor; navigating along a path of the robot in the workspace based on the stream of spatial data; while navigating, mapping at least part of the workspace based on the stream of spatial data to form or update a spatial map in memory; and switching to a second mode of operation if the stream of spatial data is unavailable due to the first sensor becoming impaired or inoperative.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
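The mapping method above hinges on a graceful fallback: while the spatial-data stream is healthy the robot navigates and maps from it, and when the first sensor becomes impaired it switches to a second mode driven by the movement-data stream. A minimal Python sketch of that switch, with invented mode names and a toy dead-reckoning pose update:

```python
def navigate_step(spatial_frame, odometry_delta, pose):
    """Return (mode, new_pose); fall back to dead reckoning when the
    spatial stream is unavailable."""
    if spatial_frame is not None:          # first sensor healthy: map and localize
        return "SLAM", spatial_frame["pose"]
    x, y = pose                            # second mode: integrate movement data
    dx, dy = odometry_delta
    return "DEAD_RECKONING", (x + dx, y + dy)

print(navigate_step({"pose": (2.0, 3.0)}, (0.1, 0.0), (2.0, 3.0)))
print(navigate_step(None, (0.1, 0.0), (2.0, 3.0)))  # ('DEAD_RECKONING', (2.1, 3.0))
```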
Provided is a tennis playing robotic device including: a chassis; a set of wheels; one or more motors to drive the wheels; one or more processors; one or more sensors; one or more arms pivotally coupled to the chassis; and one or more tennis rackets, each of the one or more tennis rackets being coupled to a terminal end of a corresponding arm of the one or more arms.
A tangible, non-transitory, machine readable medium storing instructions that when executed by an image processor effectuates operations including: causing the camera to capture one or more images of an environment of the robotic device; receiving, with the image processor, one or more multidimensional arrays including at least one parameter that describes a feature included in the one or more images, wherein values of the at least one parameter correspond with pixels of a corresponding one or more images of the feature; determining, with the image processor, an amount of asymmetry of the feature in the one or more images based on at least a portion of the values of the at least one parameter; and, transmitting, with the image processor, a signal to the processor of the controller to adjust a heading of the robotic device by an amount proportional to the amount of asymmetry of the feature.
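A straightforward reading of the asymmetry measure above is to compare the summed parameter values on either side of the feature's centerline and steer by a gain times the signed imbalance. The sketch below takes that reading; the gain and the toy feature array are assumptions.

```python
def heading_adjustment(feature_values, gain=0.01):
    """feature_values: 2-D list of parameter values describing one feature.
    Returns a heading correction proportional to the feature's asymmetry."""
    mid = len(feature_values[0]) // 2
    left = sum(sum(row[:mid]) for row in feature_values)
    right = sum(sum(row[mid:]) for row in feature_values)
    asymmetry = right - left               # signed imbalance across the centerline
    return gain * asymmetry                # proportional response (gain assumed)

feature = [[1, 2, 8, 9],
           [1, 1, 9, 9]]
print(heading_adjustment(feature))   # 0.3: steer toward the heavier side
```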
A method for efficient navigational and work duty planning for mobile robotic devices. A mobile robotic device will autonomously create a plan for navigation and work duty functions based on data compiled regarding various considerations in the work environment. These factors include the type of work surface being operated on, whether dynamic obstacles are present in the work environment, and similar considerations.
Provided is a method for operating a robot, including capturing images of a workspace, comparing at least one object from the captured images to objects in an object dictionary, identifying a class to which the at least one object belongs using an object classification unit, instructing the robot to execute at least one action based on the object class identified, capturing movement data of the robot, and generating a planar representation of the workspace based on the captured images and the movement data, wherein the captured images indicate a position of the robot relative to objects within the workspace and the movement data indicates movement of the robot.
A method for confining and/or modifying the movement of robotic devices by means of a boundary component. The boundary component is placed within an area co-located with the robotic device. The boundary component has a predetermined surface indentation pattern that may be discerned by a sensor component installed on the robotic device. A robotic device configured with a line laser emitting diode, an image sensor, and an image processor detects predetermined indentation patterns of surfaces within a specific environment. The line laser diode emits the line laser upon surfaces within the field of view of the image sensor. The image sensor captures images of the projected line laser and sends them to the image processor. The image processor iteratively compares received images against a predetermined surface indentation pattern of the boundary component. Once the predetermined indentation pattern is detected, the robotic device may mark the location within the working map of the environment. This marked location, and hence the boundary component, may be used in confining and/or modifying the movements of the robotic device within or adjacent to the area of the identified location. This may include using the marked location to avoid or stay within certain areas or to execute pre-programmed actions in certain areas.
Provided are operations including: receiving, with one or more processors of a robot, an image of an environment from an imaging device separate from the robot; obtaining, with the one or more processors, raw pixel intensity values of the image; extracting, with the one or more processors, objects and features in the image by grouping pixels with similar raw pixel intensity values, and by identifying areas in the image with greatest change in raw pixel intensity values; determining, with the one or more processors, an area within a map of the environment corresponding with the image by comparing the objects and features of the image with objects and features of the map; and, inferring, with the one or more processors, one or more locations captured in the image based on the location of the area of the map corresponding with the image.
A charging station for a mobile robotic vacuum using a folding dual prong system to recharge the battery of a mobile robotic vacuum. The electrical connector node contacts which charge the mobile robotic vacuum are placed on these dual prongs. The prongs extend outward from the charging station when charging is required for the mobile robotic vacuum's battery. When not charging, the prongs are retracted back into the charging station in order to protect the prongs and the electrical charging nodes.
H01M 10/46 - Accumulators structurally combined with charging apparatus
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
G05D 1/02 - Control of position or course in two dimensions
A vibrating air filter of a robotic vacuum comprising an electromagnet, a permanent magnet, and an air filter. The electromagnet may comprise a metal wire and a power source connected to a first end of the wire. The power source may deliver electric pulses in alternating directions through the wire, creating the electromagnet. The metal wire may be coiled around or placed adjacent to the permanent magnet, and a second end of the wire may be connected to the air filter. Interaction between the magnetic fields of the electromagnet and the permanent magnet may cause vibration of the wire and hence of the connected filter. Vibration of the filter may loosen any dust and debris latched onto the filter, which may then be shed into a dust bin of the vacuum.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
Provided is a robotic device including: a body; an electronic computing device housed within the body; and at least two wheel suspension systems coupled with the body including: a first suspension system including: a frame; a rotating arm pivotally coupled to the frame on a first end and coupled to a wheel on a second end; and an extension spring coupled with the rotating arm on a third end and the frame on a fourth end, wherein the extension spring is extended when the wheel is retracted; and a second suspension system including: a base slidingly coupled with the frame; a plurality of vertically positioned extension springs coupled with the frame on a fifth end and the base on a sixth end; at least one set of paired magnets, with at least one magnet affixed to the frame and paired to at least one magnet affixed to the base.
B60G 3/26 - Means for maintaining substantially-constant wheel camber during suspension movement
B60G 21/05 - Interconnection systems for two or more resiliently-suspended wheels, e.g. for stabilising a vehicle body with respect to acceleration, deceleration or centrifugal forces permanently interconnected mechanically between wheels on the same axle but on different sides of the vehicle, i.e. the left and right wheel suspensions being interconnected
B60G 3/14 - Resilient suspensions for a single wheel with a single pivoted arm the arm being essentially parallel to the longitudinal axis of the vehicle the arm being rigid
Provided is a method for establishing and maintaining a user loyalty metric to access a plurality of robotic device functions including: receiving biometric data associated with a user; authenticating the user; providing a time access memory, wherein the time access memory comprises a plurality of memory cells; assigning a predetermined time slot to each of the plurality of memory cells, wherein each of the plurality of memory cells is available for writing only during the predetermined time slot, after which each memory cell is made read-only; storing the biometric data of the user, if the user is authenticated, within a currently available memory cell of the time access memory; increasing the user loyalty metric if the user is authenticated; and providing access to the plurality of robotic device functions in accordance with the user loyalty metric.
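The time access memory described above behaves like a write-once, slot-gated buffer: each cell accepts data only during its assigned time slot and is read-only afterward. A Python sketch under those assumptions follows; the cell count, slot length, and record fields are illustrative.

```python
import time

class TimeAccessMemory:
    def __init__(self, n_cells, slot_seconds):
        self.cells = [None] * n_cells
        self.slot_seconds = slot_seconds
        self.t0 = time.monotonic()

    def _current_slot(self):
        return int((time.monotonic() - self.t0) / self.slot_seconds)

    def write(self, data):
        """Store data in the cell whose time slot is currently open."""
        slot = self._current_slot()
        if slot >= len(self.cells):
            raise IndexError("all time slots exhausted")
        if self.cells[slot] is not None:
            raise PermissionError("cell already written; now read-only")
        self.cells[slot] = data

mem = TimeAccessMemory(n_cells=4, slot_seconds=3600.0)
# Record stored only after the user authenticates (fields assumed).
mem.write({"user": "alice", "biometric_hash": "4f2a", "loyalty": 1})
```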
A method including: positioning sensors on a robotic device; positioning a camera on the robotic device; capturing an image of the environment; measuring color depth of each pixel in the image; classifying each pixel into a color depth range; determining for at least one set of two points captured in the image, if the color depth of pixels measured in a region between the two points is within a predetermined range of color; generating at least one line between the two points when the color depth of pixels measured in the region between the two points is within the predetermined range of color; identifying on a map of the environment a wall surface on which the line is generated as a flat wall surface; and adjusting a heading of the robotic device relative to an angle of the wall surface.
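The flat-wall test above can be read as a tolerance check on the color-depth values sampled between two points: if the span stays within a predetermined band, a wall line is generated between the points and marked flat on the map. A minimal sketch with an assumed tolerance:

```python
def is_flat_span(depths, tolerance=5):
    """True if the color depth stays within the predetermined band."""
    return max(depths) - min(depths) <= tolerance

# Color-depth values of pixels sampled between two points on a wall.
row = [112, 113, 111, 114, 112]
if is_flat_span(row):
    # Generate the line between the two endpoints and mark the surface flat.
    wall_line = ((0, 0), (len(row) - 1, 0))
    print("flat wall surface:", wall_line)
```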
Provided is a robotic cooking device including: a chassis; a set of wheels; a processor; an actuator; one or more sensors; one or more motors; and one or more cooking devices. An application of a communication device wirelessly connected to the robotic cooking device is used for one or more of: choosing settings of the robotic cooking device, choosing a location of the robotic cooking device, adjusting or generating a map of the environment, adjusting or generating a navigation path of the robotic cooking device, adjusting or generating boundaries of the robotic cooking device, and monitoring a food item within the one or more cooking devices.
G05D 23/19 - Control of temperature characterised by the use of electric means
F04D 27/00 - Control, e.g. regulation, of pumps, pumping installations or pumping systems specially adapted for elastic fluids
G05D 1/02 - Control of position or course in two dimensions
F24C 1/16 - Stoves or ranges in which the fuel or energy supply is not restricted to solid fuel or to a type covered by a single one of groups ; Stoves or ranges in which the type of fuel or energy supply is not specified with special adaptation for travelling, e.g. collapsible
Provided is a method including capturing a plurality of images by at least one sensor of a robot; aligning, with a processor of the robot, data of respective images based on an area of overlap between the fields of view of the plurality of images; and determining, with the processor of the robot, based on alignment of the data, a spatial model of the environment.
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
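The alignment step above resembles classic scan stitching: slide one reading over another and keep the offset at which the overlapping segment disagrees least. The 1-D Python sketch below stands in for the patent's multi-image alignment; the data and the error metric are illustrative assumptions.

```python
def best_shift(scan_a, scan_b, min_overlap=3):
    """Return the shift of scan_b along scan_a whose overlap error is lowest."""
    best = (float("inf"), 0)
    for shift in range(1, len(scan_a) - min_overlap + 1):
        overlap_a = scan_a[shift:]
        overlap_b = scan_b[:len(overlap_a)]
        err = sum(abs(a - b) for a, b in zip(overlap_a, overlap_b))
        best = min(best, (err / len(overlap_a), shift))
    return best[1]

a = [10, 11, 13, 14, 15, 17]
b = [14, 15, 17, 18, 20, 21]   # overlaps the tail of `a`
print(best_shift(a, b))        # 3 -> stitch b starting at index 3 of a
```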
Provided is a method including capturing, by an image sensor disposed on a robot, images of a workspace; obtaining, by a processor of the robot or via the cloud, the captured images; comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary; identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit; and instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified.
A recharge station for a mobile robot and method for navigating to a recharge station. Two signal emitters on the recharge station emit uniquely identifiable signals in two separate ranges. A mobile robot is configured to look for the signals with two signal receivers, a left receiver looking for the signals of the left emitter and a right receiver looking for the signals of the right emitter. Upon sensing the left emitter signals with the left receiver and the right emitter signals with the right receiver, the mobile robot is aligned with the recharge station. The mobile robot is configured to then drive forward until charging contacts on the mobile robot make contact with charging contacts on the recharge station.
G05D 1/02 - Control of position or course in two dimensions
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
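The docking behavior above reduces to a two-receiver rule: drive forward only when the left receiver hears the left emitter's code and the right receiver hears the right emitter's code; otherwise rotate to correct. In the sketch below the signal codes are invented and the correction directions are one plausible choice, not taken from the patent.

```python
LEFT_CODE, RIGHT_CODE = 0xA1, 0xB2   # uniquely identifiable signals (assumed)

def docking_command(left_rx, right_rx):
    """Decide the next motion from the codes each receiver currently hears."""
    if left_rx == LEFT_CODE and right_rx == RIGHT_CODE:
        return "forward"        # aligned: drive until charging contacts meet
    if left_rx == RIGHT_CODE:
        return "rotate_left"    # crossed signals: one plausible correction
    if right_rx == LEFT_CODE:
        return "rotate_right"
    return "search"             # neither signal pattern found yet

print(docking_command(0xA1, 0xB2))   # forward
```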
A computer-implemented method for improving the distance readings of a range finding system, such as LIDAR, sonar, or a depth camera, during instances when the range finding system is tilted. A range finding system continuously takes distance measurements to surfaces opposite the range finding system. As the range finding system is moved toward (or away from) stationary surfaces, a processor examines the successive measurements taken by the range finding system. If the measurements reflect a steady decline (or increase) in distances, readings are accepted as normal and the system continues to operate normally. If the measurements reflect a steady decline (or increase) in distances interrupted by measurements at least a predetermined amount or percentage greater than the measurements immediately before and after the interruption, the interrupting measurements are flagged and discarded.
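The tilt filter above flags any reading at least a predetermined percentage greater than the measurements immediately before and after it. A direct Python sketch follows; the 20% threshold and the sample data are assumptions.

```python
def filter_tilt_spikes(readings, max_jump=0.20):
    """Drop readings at least `max_jump` (20%, assumed) greater than both
    the measurement immediately before and the one immediately after."""
    kept = []
    for i, r in enumerate(readings):
        prev = readings[i - 1] if i > 0 else r
        nxt = readings[i + 1] if i + 1 < len(readings) else r
        if r > prev * (1 + max_jump) and r > nxt * (1 + max_jump):
            continue                    # flagged as a tilt artifact; discard
        kept.append(r)
    return kept

# A steady approach to a wall, interrupted by one tilted reading.
print(filter_tilt_spikes([100, 95, 90, 140, 85, 80]))
# -> [100, 95, 90, 85, 80]; the 140 spike is discarded
```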
A system and method for devising a surface coverage scheme within a workspace. Space within a two-dimensional map of the workspace is identified as free, occupied, or unknown. The map is divided into a grid of cells. A loop-free spanning tree is constructed over all free cells within the grid. The robotic device is programmed to drive along the outside edge of the spanning tree to cover all portions of each free cell at least once upon completing the path. The system monitors several performance parameters during each work session and assigns negative rewards based on these parameters. A large positive reward is assigned upon completion of the surface coverage. Spanning trees with at least slight differences are used to determine which spanning tree produces the highest reward. The system is programmed to attempt to maximize rewards at all times, causing the system to learn the best eventual method or policy for servicing the workspace.
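The first step of the coverage scheme, constructing a loop-free spanning tree over the free cells, can be sketched with a breadth-first search over the grid; the robot then covers the workspace by circling the tree's outside edge. The grid below is toy data, and the BFS choice is an assumption (any loop-free tree satisfies the description).

```python
from collections import deque

def spanning_tree(free_cells, root):
    """Return parent links forming a loop-free tree over the free cells."""
    parent, frontier = {root: None}, deque([root])
    while frontier:
        x, y = frontier.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in parent:
                parent[nxt] = (x, y)    # each cell joined exactly once: no loops
                frontier.append(nxt)
    return parent

free = {(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)}
tree = spanning_tree(free, root=(0, 0))
print(len(tree) == len(free))   # True: every free cell reached, loop-free
```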
A rotatable brush with a pressure sensor. The pressure sensor comprises a projecting blade connected to a tactile sensor by a flexible member. The projecting blade extends along the length of the shaft and is housed among the plurality of bristles protruding radially from the shaft. The projecting blade compresses the flexible member when pressure around the brush reaches a predetermined threshold. Upon compression of the flexible member, the tactile sensor, electronically coupled with a processor or controller, is activated thereby triggering a variety of possible preprogrammed responses.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A46B 15/00 - Other brushes; Brushes with additional arrangements
A47L 9/04 - Nozzles with driven brushes or agitators
B08B 1/04 - Cleaning by methods involving the use of tools, brushes, or analogous members using rotary operative members
B08B 1/00 - Cleaning by methods involving the use of tools, brushes, or analogous members
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A robotic surface cleaning device is provided, including: a casing; a chassis; a set of wheels coupled to the chassis to drive the robotic surface cleaning device; a control system to instruct movement of the set of wheels; a battery to provide power to the robotic surface cleaning device; one or more sensors; a processor; a rotating assembly including a plate supported by a base of the casing and a rotating mechanism to rotate the plate; and one or more cleaning apparatuses mounted to a first side of the plate.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
97.
System and method for establishing virtual boundaries for robotic devices
Methods for utilizing virtual boundaries with robotic devices are presented, including: positioning a boundary component having a receiver pair to receive a first robotic device signal substantially simultaneously by each receiver of the receiver pair from a robotic device only when the robotic device is positioned along a virtual boundary; operating the robotic device to move automatically within an area co-located with the virtual boundary; transmitting the first robotic device signal by the robotic device; and receiving the first robotic device signal by the receiver pair thereby indicating that the robotic device is positioned along the virtual boundary.
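The boundary test above is a simultaneity check: the receiver pair registers a crossing only when both receivers pick up the robot's signal within a narrow window, which geometrically places the robot on the virtual boundary line. A minimal sketch with an assumed 2 ms window:

```python
SIMULTANEITY_WINDOW_S = 0.002   # "substantially simultaneous" window (assumed)

def on_boundary(t_left_rx, t_right_rx):
    """True when both receivers heard the robot's signal within the window."""
    return abs(t_left_rx - t_right_rx) <= SIMULTANEITY_WINDOW_S

print(on_boundary(1.0000, 1.0011))   # True: robot sits on the virtual boundary
print(on_boundary(1.0000, 1.0100))   # False: robot is off the boundary line
```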