A removable mop attachment module, including a frame; a reservoir positioned within the frame; at least one drainage aperture positioned at a bottom of the reservoir; at least one breathing aperture positioned on the reservoir; and a pressure actuated valve positioned at least partially on an inner surface of the reservoir, covering the at least one breathing aperture.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A distance estimation system in which a laser light emitter, two image sensors, and an image processor are positioned on a baseplate such that the fields of view of the image sensors overlap and contain the projections of an emitted collimated laser beam within a predetermined range of distances. The image sensors simultaneously capture images of the laser beam projections. The images are superimposed and displacement of the laser beam projection from a first image taken by a first image sensor to a second image taken by a second image sensor is extracted by the image processor. The displacement is compared to a preconfigured table relating displacement distances with distances from the baseplate to projection surfaces to find an estimated distance of the baseplate from the projection surface at the time that the images were captured.
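The table-lookup step above can be sketched as a linear interpolation over calibration pairs. The table values, units, and clamping behaviour below are illustrative assumptions, not the patented implementation.

```python
# Sketch of the displacement-to-distance lookup described above.
# The table values are illustrative placeholders, not calibrated data.

def estimate_distance(displacement_px, table):
    """Linearly interpolate distance from a (displacement -> distance) table.

    `table` is a list of (displacement_px, distance_mm) pairs; a real system
    would build it during factory calibration.
    """
    pairs = sorted(table)
    # Clamp queries outside the calibrated range to the nearest endpoint.
    if displacement_px <= pairs[0][0]:
        return pairs[0][1]
    if displacement_px >= pairs[-1][0]:
        return pairs[-1][1]
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if d0 <= displacement_px <= d1:
            t = (displacement_px - d0) / (d1 - d0)
            return r0 + t * (r1 - r0)

table = [(10, 2000), (20, 1000), (40, 500), (80, 250)]  # hypothetical calibration
print(estimate_distance(30, table))  # midway between 1000 mm and 500 mm -> 750.0
```

Larger pixel displacements map to shorter ranges here because the two sensors view the beam from offset positions, which is consistent with the superposition step the abstract describes.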
A method for localizing an electronic device, including: capturing data of surroundings of the electronic device with at least one sensor of the electronic device; and inferring a location of the electronic device based on at least some of the data of the surroundings, wherein inferring the location of the electronic device includes: determining a probability of the electronic device being located at different possible locations within the surroundings based on the at least some of the data of the surroundings; and inferring the location of the electronic device based on the probability of the electronic device being located at different possible locations within the surroundings.
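The probability-over-locations step above resembles a discrete (histogram) Bayes filter: weight each candidate location by how well the sensor data matches, normalize, and take the most probable cell. The location names and weights below are illustrative assumptions.

```python
# Minimal histogram-filter sketch of the localization inference step:
# posterior ∝ prior × sensor likelihood, then pick the argmax location.

def localize(prior, likelihood):
    """prior, likelihood: dicts mapping candidate location -> probability/weight."""
    posterior = {loc: prior[loc] * likelihood.get(loc, 0.0) for loc in prior}
    total = sum(posterior.values())
    posterior = {loc: p / total for loc, p in posterior.items()}
    best = max(posterior, key=posterior.get)
    return best, posterior

prior = {"kitchen": 0.25, "hall": 0.25, "bedroom": 0.25, "lounge": 0.25}
likelihood = {"kitchen": 0.9, "hall": 0.1, "bedroom": 0.05, "lounge": 0.05}
best, post = localize(prior, likelihood)
print(best)  # kitchen
```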
A robot including a main brush; a peripheral brush; a first actuator; a first sensor; processors; and memory storing instructions that when executed by the processors effectuate operations. The operations include determining a first location of the robot in a working environment; obtaining first data from the first sensor or another sensor indicative of a value of an environmental characteristic of the first location; adjusting a first operational parameter of the first actuator based on the sensed first data; and forming or updating a debris map of the working environment based on data output by the first sensor or the another sensor configured to collect data indicative of an existence of debris on a floor of the working environment over at least one cleaning session.
G05D 1/02 - Control of position or course in two dimensions
A47L 9/04 - Nozzles with driven brushes or agitators
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a mobile robotic device, including at least: a plurality of sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations comprising: selecting, by the processor, one or more actions to navigate through a workspace, wherein each action transitions the mobile robotic device from a current state to a next state; actuating, by the processor, the mobile robotic device to execute the selected one or more actions; detecting, by the processor, whether a collision is incurred by the mobile robotic device for each action executed; and assigning, by the processor, each collision to a location within a map of the workspace, wherein the location corresponds to where the respective collision occurred.
A method for determining at least one action of a robot, including capturing, with an image sensor disposed on the robot, images of objects within an environment of the robot as the robot moves within the environment; identifying, with a processor of the robot, at least a first object based on the captured images; and actuating, with the processor, the robot to execute at least one action based on the first object identified, wherein the at least one action comprises at least generating a virtual boundary and avoiding crossing the virtual boundary.
Provided is an autonomous wheeled device. A first sensor obtains first data indicative of distances to objects within an environment of the autonomous wheeled device and a second sensor obtains second data indicative of movement of the autonomous wheeled device. A processor generates at least a portion of a map of the environment using at least one of the first data and the second data and a first path of the autonomous wheeled device. The processor transmits first information to an application of a communication device paired with the autonomous wheeled device and receives second information from the application.
A method for pairing a robotic device with an application of a communication device, including: the application receiving an indication to pair the robotic device with the application; the application receiving a password for a first Wi-Fi network; the robotic device enabling pairing with the application upon a user pressing at least one button on a user interface of the robotic device; and the application displaying a map of an environment of the robotic device and a status of the robotic device and receiving mapping, cleaning, and scheduling information.
H04W 76/11 - Allocation or use of connection identifiers
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
A robot including a medium storing instructions that when executed by a processor of the robot effectuates operations including: capturing images of a workspace as the robot moves within the workspace; identifying at least one characteristic of an object captured in the images of the workspace; determining an object type of the object based on an object dictionary of different types of objects, wherein the different object types comprise at least a cord, clothing garments, a shoe, earphones, and pet bodily waste; and instructing the robot to execute at least one action based on the object type of the object, wherein the at least one action comprises avoiding the object or cleaning around the object.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
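The object-dictionary step in the abstract above amounts to mapping a recognized object type to a response. The type names follow the abstract; the mapping structure and action labels are assumptions for illustration.

```python
# Sketch of an object-dictionary lookup driving the robot's response.
# Types follow the abstract; actions and the default are assumed examples.

OBJECT_ACTIONS = {
    "cord": "avoid",
    "clothing_garment": "avoid",
    "shoe": "avoid",
    "earphones": "avoid",
    "pet_bodily_waste": "avoid",
}

def action_for(object_type, default="clean_around"):
    """Return the configured action for a recognized object type."""
    return OBJECT_ACTIONS.get(object_type, default)

print(action_for("cord"))       # avoid
print(action_for("dust_pile"))  # clean_around
```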
Provided is a medium storing instructions that when executed by one or more processors of a robot effectuate operations including: obtaining, with a processor, first data indicative of a position of the robot in a workspace; actuating, with the processor, the robot to drive within the workspace to form a map including mapped perimeters that correspond with physical perimeters of the workspace while obtaining, with the processor, second data indicative of displacement of the robot as the robot drives within the workspace; and forming, with the processor, the map of the workspace based on at least some of the first data; wherein: the map of the workspace expands as new first data of the workspace are obtained with the processor; and the robot is paired with an application of a communication device.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
G06T 3/00 - Geometric image transformation in the plane of the image
An autonomous mobile robotic refuse container device that transports itself from a storage location to a refuse collection location and back to the storage location after collection of the refuse. When it is time for refuse collection, the robotic device autonomously navigates from the refuse container storage location to the refuse collection location. Once the refuse within the container has been collected, the robotic device autonomously navigates back to the refuse container storage location.
Included is a surface cleaning service system including: one or more robotic surface cleaning devices, each including: a chassis; a set of wheels; one or more motors to drive the wheels; one or more processors; one or more sensors; and a network interface card, wherein the one or more processors of each of the one or more robotic surface cleaning devices determine respective usage data. A control system or the one or more processors of each of the one or more robotic surface cleaning devices is configured to associate each usage data with a particular corresponding robotic surface cleaning device of the one or more robotic surface cleaning devices.
G05D 1/02 - Control of position or course in two dimensions
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A robotic cleaner executing operations such as capturing data indicative of locations of objects in a workspace through which the robot moves; generating or updating a map of at least a part of the workspace based on at least the data; and navigating based on the map or an updated map of the workspace. The robotic cleaner may include a side brush with a main body with at least one attachment point and at least one bundle of bristles attached to the at least one attachment point of the main body, wherein the bristles are between 50 and 90 millimeters in length and angled at between 5 and 30 degrees with respect to a horizontal plane.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is an autonomous coverage robot including: a chassis; a set of wheels; a plurality of sensors; and a mopping assembly including: a fluid reservoir for storing a cleaning fluid; a cloth for receiving the cleaning fluid, wherein the cloth is oriented toward a work surface; a means to move at least the cloth of the mopping assembly up and down in a plane perpendicular to the work surface, wherein the means to move at least the cloth of the mopping assembly up and down is controlled automatically based on input provided by at least one of the plurality of sensors; and a means to move at least a portion of the mopping assembly back and forth in a plane parallel to the work surface.
A47L 7/00 - Suction cleaners adapted for additional purposes; Tables with suction openings for cleaning purposes; Containers for cleaning articles by suction; Suction cleaners adapted to cleaning of brushes; Suction cleaners adapted to taking-up liquids
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
A47L 5/00 - Structural features of suction cleaners
Provided is an electronic razor, including: a frame; one or more razor blades detachable from the frame; a razor blade motor to drive the one or more razor blades; one or more sensors; a processor; and a suctioning mechanism positioned below the one or more razor blades, including: a suction fan; a suction fan motor to drive the suction fan; and a hair collection compartment.
B26B 19/38 - Clippers or shavers operating with a plurality of cutting edges, e.g. hair clippers, dry shavers - Details of, or accessories for, hair clippers or dry shavers, e.g. housings, casings, grips or guards
B26B 19/44 - Suction means for collecting severed hairs or for the skin to be shaved
B26B 21/40 - Razors of the open or knife type; Safety razors or other shaving implements of the planing type; Hair-trimming devices involving a razor-blade; Equipment therefor - Details or accessories
B26B 19/02 - Clippers or shavers operating with a plurality of cutting edges, e.g. hair clippers, dry shavers of the reciprocating-cutter type
B26B 19/20 - Clippers or shavers operating with a plurality of cutting edges, e.g. hair clippers, dry shavers with provision for shearing hair of preselected or variable length
B26B 21/38 - Safety razors with one or more blades arranged transversely to the handle with provision for reciprocating the blade by means other than rollers
16. Robotic floor cleaning device with controlled liquid release mechanism
A robotic floor cleaning device that features a controlled liquid release mechanism. A rotatable cylinder with at least one aperture for storing a limited quantity of liquid is connected to a non-propelling wheel of the robotic floor cleaning device. A passage runs below the cylinder, between the cylinder and a drainage mechanism. The cylinder is within or adjacent to a liquid reservoir. Each time an aperture is exposed to the liquid within the reservoir, it fills with liquid. As the wheel turns, the connected cylinder rotates until the aperture is adjacent to the passage. The liquid in the aperture then flows through the passage into the drainage mechanism, which disperses it onto the working surface. The release of liquid halts when the connected wheel stops turning.
A47L 11/12 - Floor surfacing or polishing machines motor-driven with reciprocating or oscillating tools
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
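Because liquid release in the mechanism above is tied to wheel rotation, the geometry fixes a dosing rate per metre of travel. A back-of-envelope sketch, with the aperture volume and wheel diameter as assumed example values:

```python
# Dosing-rate sketch: with one aperture, each wheel revolution empties the
# aperture once, so liquid per metre is fixed by geometry.
# APERTURE_VOLUME_ML and WHEEL_DIAMETER_M are assumptions, not patent values.

import math

APERTURE_VOLUME_ML = 0.2   # assumed aperture capacity
WHEEL_DIAMETER_M = 0.05    # assumed non-propelling wheel diameter
APERTURES_PER_REV = 1

def ml_per_metre():
    circumference = math.pi * WHEEL_DIAMETER_M
    return APERTURES_PER_REV * APERTURE_VOLUME_ML / circumference

print(round(ml_per_metre(), 3))  # ~1.273 ml of liquid per metre travelled
```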
Provided is a system including at least two robots. A first robot includes a chassis, a set of wheels, a wheel suspension, sensors, a processor, and a machine-readable medium for storing instructions. A camera of the first robot captures images of an environment from which the processor generates or updates a map of the environment and determines a location of items within the environment. The processor extracts features of the environment from the images and determines a location of the first robot. The processor transmits information to a processor of a second robot and determines an action of the first robot and the second robot. A smart phone application is paired with at least the first robot and is configured to receive at least one user input specifying an instruction for at least the first robot and at least one user preference.
Some aspects provide a method for instructing operation of a robotic floor-cleaning device based on the position of the robotic floor-cleaning device within a two-dimensional map of a workspace. The two-dimensional map is generated using inputs from sensors positioned on the robotic floor-cleaning device to represent the multi-dimensional workspace of the robotic floor-cleaning device. The two-dimensional map is provided to a user on a user interface. The user may adjust the boundaries of the two-dimensional map through the user interface and select settings for map areas to control device operation in various areas of the workspace.
Provided is a method including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within a workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; and actuating, with the processor of the robot, the robot to move along the first movement path.
G06T 7/55 - Depth or shape recovery from multiple images
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
Some embodiments include a robot, including: a chassis; a set of wheels coupled to the chassis; at least one encoder coupled to a wheel with a resolution of at least one count for every ninety-degree rotation of the wheel; a trailing arm suspension coupled to each drive wheel for overcoming surface transitions and obstacles, wherein a first suspension arm is positioned on a right side of a right drive wheel and a second suspension arm is positioned on a left side of a left drive wheel; a roller brush; a collection bin; a fan with multiple blades for creating a negative pressure resulting in suction of dust and debris; a network card for wireless communication with at least one of: a computing device, a charging station, and another robot; a plurality of sensors; a processor; and a medium storing instructions that when executed by the processor effectuates robotic operations.
G05D 1/02 - Control of position or course in two dimensions
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
A47L 5/22 - Structural features of suction cleaners with power-driven air-pumps or air-compressors, e.g. driven by motor vehicle engine vacuum with rotary fans
A47L 9/04 - Nozzles with driven brushes or agitators
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
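The encoder resolution stated above (at least one count per ninety-degree wheel rotation) sets the odometry granularity: each count corresponds to at most a quarter of the wheel circumference. A minimal sketch, with the wheel diameter as an assumed example value:

```python
# Wheel-odometry sketch at the stated resolution: one count per quarter turn.
# WHEEL_DIAMETER_M is an assumption for illustration, not a patent value.

import math

WHEEL_DIAMETER_M = 0.07   # assumed wheel diameter in metres
COUNTS_PER_REV = 4        # one count per 90 degrees, as stated

def distance_travelled(counts):
    """Distance in metres implied by a number of encoder counts."""
    circumference = math.pi * WHEEL_DIAMETER_M
    return counts * circumference / COUNTS_PER_REV

print(round(distance_travelled(8), 4))  # two full revolutions
```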
A method for centrally aligning a robot with an electronic device, including: transmitting, with at least one transmitter, a first signal; receiving, with a first receiver and a second receiver, the first signal; detecting, with a controller coupled to the first receiver and the second receiver, the robot is centrally aligned with the electronic device when the first receiver and the second receiver simultaneously receive the first signal, wherein a virtual line passing through a center of the robot and a center of the electronic device is aligned with a midpoint between the first receiver and the second receiver; and executing, with the robot, a particular movement type when the robot is aligned with the electronic device.
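The simultaneous-reception test above can be sketched as comparing reception timestamps at the two receivers against a small tolerance. The tolerance value and timestamp interface are assumptions for illustration.

```python
# Sketch of the alignment test: the robot is treated as centrally aligned
# when both receivers detect the transmitted signal near-simultaneously.
# The tolerance is an assumed value, not from the patent.

def is_aligned(t_left, t_right, tolerance_s=0.005):
    """t_left/t_right: reception timestamps in seconds at the two receivers;
    None means that receiver did not detect the signal at all."""
    if t_left is None or t_right is None:
        return False
    return abs(t_left - t_right) <= tolerance_s

print(is_aligned(1.000, 1.002))  # True: near-simultaneous reception
print(is_aligned(1.000, 1.050))  # False: one side received much later
print(is_aligned(1.000, None))   # False: only one receiver in the beam
```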
Provided is a robot, including: a chassis; a set of wheels coupled to the chassis; at least one motor for driving the set of wheels; at least one motor controller; a range finding system coupled to the robot; a plurality of sensors; a processor; and a tangible, non-transitory, machine-readable medium storing instructions that when executed by the processor effectuates operations including: obtaining, with the processor, distances to obstacles measured by the range finding system as the robot moves relative to the obstacles; monitoring, with the processor, the distance measurements; identifying, with the processor, outlier distance measurements in otherwise steadily fitting distance measurements; determining, with the processor, a depth of an obstacle based on the distance measurements; and determining, with the processor, a position of the obstacle based on the distance measurements.
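The outlier-identification step above can be sketched as flagging readings that jump away from both neighbours in an otherwise smoothly varying distance stream. The threshold and the simple neighbour test are assumptions, not the patented method.

```python
# Sketch of flagging outlier readings in a steadily fitting distance stream,
# e.g. a sudden dip where the range finder sweeps past an obstacle edge.

def find_outliers(readings, threshold=0.5):
    """Return indices whose reading jumps away from both neighbours."""
    outliers = []
    for i in range(1, len(readings) - 1):
        prev_jump = abs(readings[i] - readings[i - 1])
        next_jump = abs(readings[i] - readings[i + 1])
        if prev_jump > threshold and next_jump > threshold:
            outliers.append(i)
    return outliers

stream = [2.00, 1.98, 1.97, 0.90, 1.95, 1.94]  # metres; index 3 is a sudden dip
print(find_outliers(stream))  # [3]
```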
Provided is an autonomous versatile robotic chassis, including: a chassis; a set of wheels coupled to the chassis; one or more motors to drive the set of wheels; at least one storage compartment within which one or more items for delivery are placed during transportation; a processor; one or more sensors; a camera; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: generating, with the processor, a map of an environment; localizing, with the processor, the robotic chassis; receiving, with the processor, a request for delivery of the one or more items to a first location; generating, with the processor, a movement path to the first location from a current location; and instructing, with the processor, the robotic chassis to transport the one or more items to the first location by navigating along the movement path.
B60P 3/20 - Vehicles adapted to transport, to carry or to comprise special loads or objects for transporting refrigerated goods
B60N 3/10 - Arrangements or adaptations of other passenger fittings, not otherwise provided for of receptacles for food or beverages, e.g. refrigerated
G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
Provided is a machine-readable medium storing instructions that when executed by a processor effectuate operations including: receiving, with an application executed by a communication device, a first set of inputs including user data; generating, with the application, a three-dimensional model of the user based on the user data; receiving, with the application, a second set of inputs including a type of clothing garment; generating, with the application, a first set of clothing garments including clothing garments from a database of clothing garments that are the same type of clothing garment; generating, with the application, a second set of clothing garments from the first set of clothing garments based on the user data and one or more relationships between clothing attributes and human attributes; and presenting, with the application, the clothing garments from the second set of clothing garments virtually fitted on the three-dimensional model of the user.
Provided is a medium storing instructions that when executed by one or more processors effectuate operations including: obtaining a stream of spatial data indicative of a robot's position in a workspace; obtaining a stream of movement data indicative of the robot's displacement in the workspace; navigating along a path of the robot in the workspace based on the stream of spatial data; while navigating, mapping at least part of the workspace based on the stream of spatial data to form or update a spatial map in memory; wherein the spatial map expands as new areas of the workspace are covered by the robot and spatial data of the new areas of the workspace are obtained and used by the one or more processors to update the spatial map; and wherein the spatial map of the workspace is segmented into two or more zones.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
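The expanding-map behaviour described above can be sketched with a sparse set of observed grid cells that grows as spatial data streams in, followed by a trivial zone split. The cell size and the split-by-x heuristic are illustrative assumptions.

```python
# Sketch of an expanding spatial map: a sparse set of observed grid cells
# that grows with new readings, then a simple two-zone segmentation.
# CELL size and the x-coordinate split are assumed for illustration.

CELL = 0.25  # metres per grid cell

def update_map(spatial_map, readings):
    """Add each (x, y) reading's grid cell to the map; the map only grows."""
    for x, y in readings:
        spatial_map.add((int(x // CELL), int(y // CELL)))
    return spatial_map

def split_zones(spatial_map, x_divide=0):
    """Segment the map into two zones by a vertical dividing line."""
    left = {c for c in spatial_map if c[0] < x_divide}
    return left, spatial_map - left

m = set()
update_map(m, [(0.1, 0.1), (-0.6, 0.3)])
update_map(m, [(1.3, 0.9)])            # map expands as a new area is covered
left, right = split_zones(m)
print(sorted(left), sorted(right))     # [(-3, 1)] [(0, 0), (5, 3)]
```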
A fleet of delivery robots, each including: a chassis; a storage compartment within which items are stored for transportation; a set of wheels coupled to the chassis; at least one sensor; a processor electronically coupled to a control system and the at least one sensor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with the at least one sensor, data of an environment and data indicative of movement of the respective delivery robot; generating or updating, with the processor, a first map of the environment based on at least a portion of the captured data; inferring, with the processor, a current location of the respective delivery robot; and actuating, with the processor, the respective delivery robot to execute a delivery task including transportation of at least one item from a first location to a second location.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/02 - Control of position or course in two dimensions
G06T 7/55 - Depth or shape recovery from multiple images
Provided is a robotic device, including: a chassis; a set of wheels; a control system; a battery; one or more sensors; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with the one or more sensors, data of an environment of the robotic device and data indicative of movement of the robotic device; generating or updating, with the processor, a map of the environment based on at least a portion of the captured data; and generating or updating, with the processor, a movement path of the robotic device based on at least the map of the environment.
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of a robotic device effectuates operations including: receiving, by the processor, a sequence of one or more commands; executing, by the robotic device, the sequence of one or more commands; saving, by the processor, the sequence of one or more commands in memory after a predetermined amount of time from receiving the most recent one or more commands; and re-executing, by the robotic device, the saved sequence of one or more commands.
A mop module of a robot, including: a liquid reservoir for storing liquid; and an electronically-controlled liquid release mechanism; wherein: the electronically-controlled liquid release mechanism releases liquid from the liquid reservoir onto a work surface of the robot for mopping; a schedule for mopping at least one area is determined by a processor of the robot or based on user input provided to an application of a communication device paired with the robot; and a quantity of liquid released while the robot mops the at least one area is determined by the processor of the robot or based on user input provided to the application of the communication device paired with the robot.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a bumper apparatus of a robot, including: a bumper elastically coupled with a chassis of the robot; and at least one elastic element coupled to or interfacing with both the chassis and the bumper; wherein: the at least one elastic element facilitates movement of the bumper relative to the chassis upon impact with an object and disengagement from the object after impact; the at least one elastic element facilitates a return of the bumper to a neutral position upon disengaging from the object after impact; and the bumper covers at least a portion of the chassis.
B62D 24/04 - Vehicle body mounted on resilient suspension for movement relative to the vehicle frame
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
B62D 27/04 - Connections between superstructure sub-units resilient
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G05D 1/02 - Control of position or course in two dimensions
31. Method for robotic devices to identify doorways using machine learning
A method for identifying a doorway, including receiving, with a processor of an automated mobile device, sensor data of an environment of the automated mobile device from one or more sensors coupled with the processor, wherein the sensor data is indicative of distances to objects within the environment; identifying, with the processor, a doorway in the environment based on the sensor data; marking, with the processor, the doorway in an indoor map of the environment; and instructing, with the processor, the automated mobile device to execute one or more actions upon identifying the doorway, wherein the one or more actions comprise finishing a first task in a first work area before crossing the identified doorway into a second work area to perform a second task.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
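One way the distance-based doorway identification above could work is to look for a short run of readings that jump well beyond the surrounding wall distance. The thresholds and this particular heuristic are assumptions for illustration, not the patented (machine-learned) method.

```python
# Sketch of spotting a doorway in wall-following range data: a gap of
# door-like width where readings jump away from the wall and back.
# `jump` and `max_width` are assumed tuning values.

def find_doorway(scan, jump=0.5, max_width=8):
    """Return (start, end) indices of a candidate doorway gap, or None."""
    start = None
    for i in range(1, len(scan)):
        if start is None and scan[i] - scan[i - 1] > jump:
            start = i                  # readings jump away: gap opens
        elif start is not None and scan[i - 1] - scan[i] > jump:
            if i - start <= max_width:
                return (start, i)      # gap closes within a door-like width
            start = None
    return None

scan = [1.0, 1.0, 1.1, 2.6, 2.7, 2.6, 1.1, 1.0]  # metres along a wall
print(find_doorway(scan))  # (3, 6)
```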
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by one or more processors of a robotic device effectuate operations including capturing, with a camera of the robotic device, spatial data of surroundings of the robotic device; generating, with the one or more processors of the robotic device, a movement path based on the spatial data of the surroundings; capturing, with at least one sensor of the robotic device, at least one measurement relative to the surroundings of the robotic device; obtaining, with the one or more processors of the robotic device, the at least one measurement; and inferring, with the one or more processors of the robotic device, a location of the robotic device based on the at least one measurement.
A robotic device, including a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor effectuates operations including: capturing, with the camera, one or more images of an environment of the robotic device; capturing, with the plurality of sensors, sensor data of the environment; generating or updating, with the processor, a map of the environment; identifying, with the processor, one or more rooms in the map; receiving, with the processor, one or more multidimensional arrays including at least one parameter that is used to identify a feature included in the one or more images; determining, with the processor, a position and orientation of the robotic device relative to the feature; and transmitting, with the processor, a signal to the processor of the controller to adjust a heading of the robotic device.
A retractable cable assembly in use with an electrical charger, power adapter, or other power supply. A cable wound on a spool disposed within a housing may be extracted by manually pulling on the cable or pressing of a release switch until the desired length of the cable is drawn. As the cable is drawn an engaged locking mechanism is used to keep the cable in place during and after extraction of the cable until which time retraction of the cable is desired. A retraction actuator disengages the locking mechanism, thereby freeing the cable and immediately retracting the cable within the housing.
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by at least one image processor effectuates operations including: capturing, with a first image sensor, a first image of at least two light points projected on a surface by at least one laser light emitter; extracting, with the at least one image processor, a first distance between the at least two light points in the first image in a first direction; and estimating, with the at least one image processor, a first distance to the surface on which the at least two light points are projected based on at least the first distance between the at least two light points and a predetermined relationship relating a distance between at least two light points in the first direction and a distance to the surface on which the at least two light points are projected.
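For two parallel collimated beams a fixed physical baseline apart, the pixel spacing between the projected points shrinks roughly inversely with range, which is one plausible form of the claimed "predetermined relationship". A minimal sketch under that assumption — the calibration numbers are illustrative, not from the claims:

```python
# Hedged sketch: spacing-to-range model z = k / spacing for two parallel
# beams, with k taken from a single calibration measurement. All numeric
# values are illustrative assumptions.

def calibrate(known_range_mm, measured_spacing_px):
    """Derive the constant k from one known range and its observed spacing."""
    return known_range_mm * measured_spacing_px

def estimate_range_mm(spacing_px, k):
    """Estimate the distance to the surface from the spacing between points."""
    return k / spacing_px

k = calibrate(500.0, 120.0)        # at 500 mm the points sit 120 px apart
print(estimate_range_mm(60.0, k))  # half the spacing -> 1000.0 mm
```

A production system would fit the relationship over many calibration points (or store it as a table) rather than trust a single measurement.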
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
Provided is a robot, including: a plurality of sensors; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with an image sensor, images of a workspace as the robot moves within the workspace; identifying, with the processor, at least one characteristic of at least one object captured in the images of the workspace; determining, with the processor, an object type of the at least one object based on characteristics of different types of objects stored in an object dictionary; and instructing, with the processor, the robot to execute at least one action based on the object type of the at least one object.
Provided is a vibrating filter mechanism, including: a power source; a metal wire attached on both ends to the power source; a filter; a connector coupled with the filter and interfacing with the metal wire; and a first permanent magnet; wherein: the power source delivers electric current pulses in alternating directions through the metal wire; and the first permanent magnet is positioned in a location where a magnetic field of the first permanent magnet and a magnetic field of the metal wire interact and cause vibration of the metal wire and the coupled filter.
B01D 46/76 - Regeneration of the filtering material or filter elements inside the filter by forces created by movement of the filter element involving vibrations
B25J 11/00 - Manipulators not otherwise provided for
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a tangible, non-transitory, machine-readable medium storing instructions that when executed by a processor effectuate operations including: obtaining, with one or more rangefinder sensors positioned on a mobile automated device, distances from the one or more rangefinder sensors to a surface; monitoring, with the processor, the distances sensed by each of the one or more rangefinder sensors; detecting, with the processor, an edge when a change in the distances is greater than a predetermined amount; and actuating, with the processor, the mobile automated device to execute one or more movement patterns upon detecting the edge, wherein the one or more movement patterns initiates movement of the mobile automated device away from the area where the edge was detected.
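The edge-detection step above reduces to a threshold test on successive rangefinder readings: a sudden jump in floor distance signals a drop-off. A minimal sketch, assuming an illustrative 40 mm threshold and hypothetical millimetre readings (neither appears in the claim text):

```python
# Hedged sketch: cliff-edge detection from a stream of downward rangefinder
# readings. Threshold and readings are illustrative assumptions.

EDGE_THRESHOLD_MM = 40.0  # a change larger than this is treated as an edge

def detect_edge(previous_mm, current_mm, threshold_mm=EDGE_THRESHOLD_MM):
    """Return True when the floor distance jumps by more than the threshold."""
    return abs(current_mm - previous_mm) > threshold_mm

def monitor(readings_mm):
    """Scan readings and return the index at which an edge first appears."""
    for i in range(1, len(readings_mm)):
        if detect_edge(readings_mm[i - 1], readings_mm[i]):
            return i  # here the avoidance movement pattern would be triggered
    return None

# A level floor followed by a sudden drop (e.g. a stair edge):
print(monitor([30, 31, 30, 32, 180, 182]))  # 4
```

On detection, the device would then execute the claimed movement pattern away from the drop (e.g. reverse and turn) rather than merely report the index.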
B25J 11/00 - Manipulators not otherwise provided for
B25J 5/00 - Manipulators mounted on wheels or on carriages
A47L 9/04 - Nozzles with driven brushes or agitators
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
G01S 15/931 - Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
Provided is a system for robotic collaboration, including a first robotic chassis and a second robotic chassis each including a medium storing instructions that when executed by a respective processor effectuates operations including: capturing data of an environment and data indicative of movement; inferring locations visited up to a current location based on at least the data of the environment; and tracking areas cleaned based on the locations visited. The first robotic chassis performs a first part of a task and the second robotic chassis performs a second part of the task after the first robotic chassis completes the first part of the task.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
Provided is a method including emitting, with a laser light emitter disposed on a robot, a collimated laser beam projecting a light point on a surface opposite the laser light emitter; capturing, with each of at least two image sensors disposed on the robot, images of the projected light point; overlaying, with a processor of the robot, the images captured by the at least two image sensors to produce a superimposed image showing both captured images in a single image; determining, with the processor of the robot, a first distance between the projected light points in the superimposed image; and determining, with the processor, a second distance based on the first distance using a relationship that relates distance between light points with distance between the robot or a sensor thereof and the surface on which the collimated laser beam is projected.
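The second determining step maps a pixel displacement to a range, which the related abstract describes as a preconfigured table lookup. A minimal sketch with linear interpolation between table entries — the table values are illustrative stand-ins for factory calibration data:

```python
# Hedged sketch: range from the displacement between the two laser-point
# positions in the superimposed image, via an interpolated calibration table.
# Table entries are illustrative assumptions.

from bisect import bisect_left

# (displacement_px, distance_mm), sorted by displacement
TABLE = [(20, 2000.0), (40, 1000.0), (80, 500.0), (160, 250.0)]

def distance_from_displacement(displacement_px):
    """Linearly interpolate the calibration table; clamp outside its range."""
    xs = [d for d, _ in TABLE]
    i = bisect_left(xs, displacement_px)
    if i == 0:
        return TABLE[0][1]
    if i == len(TABLE):
        return TABLE[-1][1]
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    t = (displacement_px - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Light point seen at x=300 px by one sensor and x=240 px by the other
# in the superimposed image -> 60 px displacement:
print(distance_from_displacement(abs(300 - 240)))  # 750.0
```

Interpolating (rather than snapping to the nearest entry) keeps the table small while still giving a continuous range estimate.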
A robot configured to perceive a model of an environment, including: a chassis; a set of wheels; a plurality of sensors; a processor; and memory storing instructions that when executed by the processor effectuates operations including: capturing a plurality of data while the robot moves within the environment; perceiving the model of the environment based on at least a portion of the plurality of data, the model being a top view of the environment; storing the model of the environment in a memory accessible to the processor; and transmitting the model of the environment and a status of the robot to an application of a smartphone previously paired with the robot.
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of a robot effectuates operations including: capturing, with at least one sensor, first data used in indicating a position of the robot; capturing, with at least one sensor, second data indicative of movement of the robot; recognizing, with the processor of the robot, a first area of the workspace based on at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path; and generating, with the processor of the robot, a map of the workspace based on at least one of: the first data and the second data.
G05D 1/02 - Control of position or course in two dimensions
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
A method for determining at least one action of a robot, including capturing, with an image sensor disposed on the robot, images of objects within an environment of the robot as the robot moves within the environment; identifying, with a processor of the robot, at least one object based on the captured images; marking, with the processor, a location of the at least one object in a map of the environment; and actuating, with the processor, the robot to execute at least one action based on the at least one object identified.
Some aspects include a schedule development method for a robotic floor-cleaning device that recognizes patterns in user input to automatically devise a work schedule.
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a robot including a chassis; a set of wheels coupled to the chassis; a plurality of sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations. The operations include capturing, with an image sensor disposed on the robot, a plurality of images of an environment of the robot as the robot navigates within the environment; identifying, with the processor, an obstacle type of an obstacle captured in an image based on a comparison between features of the obstacle and features of obstacles with different obstacles types stored in a database; and determining, with the processor, an action of the robot based on the obstacle type of the obstacle.
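The comparison against stored features can be sketched as nearest-neighbour matching over feature vectors. The obstacle types, three-component features, and query values below are illustrative assumptions; a real system would use learned descriptors of much higher dimension:

```python
# Hedged sketch: classify an obstacle by the closest (Euclidean) match
# between its feature vector and per-type vectors in a database.
# Types and feature values are illustrative assumptions.

import math

DATABASE = {
    "cable": [0.9, 0.1, 0.2],
    "sock":  [0.2, 0.8, 0.3],
    "pet":   [0.1, 0.3, 0.9],
}

def classify(features, database=DATABASE):
    """Return the obstacle type whose stored features are closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda t: dist(features, database[t]))

print(classify([0.85, 0.15, 0.25]))  # cable
```

The determined type would then index into an action policy (e.g. detour widely around a pet, drive over a flat cable cover, or stop for an unknown object).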
A method for pairing a robotic device with an application of a communication device, including using, with the application, unique login information to log into the application; receiving, with the application, a SSID of a first Wi-Fi network to which the communication device is connected and a password for the first Wi-Fi network; entering, with the robotic device, a pairing mode upon the user pressing a button on a user interface of the robotic device or autonomously upon powering up for a first time; transmitting, with the application, the SSID and the password of the first Wi-Fi network to the robotic device; connecting, with the robotic device, the robotic device to the first Wi-Fi network using the SSID and the password of the first Wi-Fi network; receiving, with the application, information; and transmitting, with the application, at least some of the information to the robotic device.
H04W 76/11 - Allocation or use of connection identifiers
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
Provided is a system for robotic collaboration. A first robotic device includes a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of the first robotic device effectuates first operations including: receiving first information from a processor of a second robotic device; actuating the first robotic device to execute a first action based on the first information; and transmitting second information to the processor of the second robotic device. The second robotic device includes a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor of the second robotic device effectuates second operations including: receiving the second information transmitted from the processor of the first robotic device; actuating the second robotic device to execute a second action based on the second information; and transmitting third information to the processor of the first robotic device.
Provided is a navigation system for a leader vehicle leading follower vehicles, including: the leader vehicle, configured to transmit real-time movement data to the follower vehicles; and the follower vehicles, each comprising: a signal receiver for receiving the data from the leader vehicle; sensors configured to detect at least one maneuverability condition; a memory; a vehicle maneuver controller; a distance sensor; and a processor configured to: determine a route for navigating the respective follower vehicle from an initial location; determine a preferred range of distances from the vehicle in front of the respective follower vehicle that the respective follower vehicle should stay within; determine a set of active maneuvering instructions for the respective follower vehicle based on at least a portion of the data received from the leader vehicle; determine a lag in control commands; and execute the set of active maneuvering instructions in the respective follower vehicle.
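Keeping each follower within the claimed preferred range of distances can be sketched as a simple proportional speed adjustment on the measured gap. The gain, range bounds, and speeds below are illustrative assumptions; a real controller would also account for the claimed command lag:

```python
# Hedged sketch: proportional speed correction that nudges the gap to the
# vehicle ahead back into a preferred range. All numbers are illustrative.

PREFERRED_MIN_M, PREFERRED_MAX_M = 8.0, 12.0
GAIN = 0.5  # speed change (m/s) per metre of range error

def adjust_speed(current_speed, gap_m):
    """Return a speed command given the current gap to the vehicle ahead."""
    if gap_m < PREFERRED_MIN_M:
        return current_speed - GAIN * (PREFERRED_MIN_M - gap_m)  # too close
    if gap_m > PREFERRED_MAX_M:
        return current_speed + GAIN * (gap_m - PREFERRED_MAX_M)  # falling behind
    return current_speed  # inside the preferred range: hold speed

print(adjust_speed(15.0, 6.0))   # 14.0 -> slow down
print(adjust_speed(15.0, 10.0))  # 15.0 -> hold
print(adjust_speed(15.0, 14.0))  # 16.0 -> close up
```

Measuring lag explicitly (as the claim does) lets a follower apply the leader's maneuvers with the correct time offset instead of reacting purely to the instantaneous gap.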
G08G 1/00 - Traffic control systems for road vehicles
G05D 1/02 - Control of position or course in two dimensions
B60W 30/165 - Control of distance between vehicles, e.g. keeping a distance to preceding vehicle automatically following the path of a preceding lead vehicle, e.g. "electronic tow-bar"
B60W 10/04 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
50.
Artificial neural network based controlling of window shading system and method
Provided is a window shading system including a means for shading one or more windows; a means for manually controlling at least one window shading setting; one or more sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including capturing, with the one or more sensors, environmental data of surroundings; predicting, with the processor, the at least one window shading setting using a learned function of an artificial neural network that relates the environmental data to the at least one window shading setting; and, applying, with the processor, the at least one window shading setting predicted to the window shading system.
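The claimed learned function can be sketched as a forward pass through a small feedforward network mapping environmental readings to a shade position. The architecture (one hidden layer of two units) and all weights below are illustrative stand-ins for values an actual training process would produce:

```python
# Hedged sketch: one-hidden-layer network mapping (light level, temperature),
# both scaled to [0, 1], to a shade position in [0, 1]. Weights are
# illustrative assumptions, not trained values.

import math

W1 = [[2.0, 0.5], [-1.0, 1.5]]   # hidden-layer weights (2 inputs -> 2 units)
B1 = [0.0, -0.5]                 # hidden-layer biases
W2 = [1.2, 0.8]                  # output weights
B2 = -0.3                        # output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_shade(light, temperature):
    """Forward pass: higher output = shade drawn further down."""
    inputs = [light, temperature]
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(W1, B1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + B2)

bright = predict_shade(1.0, 0.8)  # bright, warm room
dim = predict_shade(0.1, 0.2)     # dim, cool room
print(bright > dim)  # True: more shading when bright and warm
```

In the claimed system, the user's manual overrides supply the training pairs: each (environmental data, chosen setting) correction refines the weights so the predictions converge on that household's preferences.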
Provided is a process that includes: obtaining a first version of a map of a workspace; selecting a first undiscovered area of the workspace; in response to selecting the first undiscovered area, causing the robot to move to a position and orientation to sense data in at least part of the first undiscovered area; and obtaining an updated version of the map mapping a larger area of the workspace than the first version.
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G05D 1/02 - Control of position or course in two dimensions
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01C 21/20 - Instruments for performing navigational calculations
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a robot including: a chassis; wheels; electric motors; a network card; sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with at least one exteroceptive sensor, a first image and a second image; determining, with the processor, an overlapping area of the first image and the second image by comparing the raw pixel intensity values of the first image to the raw pixel intensity values of the second image; combining, with the processor, the first image and the second image at the overlapping area to generate a digital spatial representation of the environment; and estimating, with the processor using a statistical ensemble of simulated positions of the robot, a corrected position of the robot to replace a last known position of the robot within the digital spatial representation of the environment.
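Finding the overlapping area by comparing raw pixel intensities amounts to sliding one reading over the other and keeping the offset with the smallest mismatch. A minimal one-dimensional sketch — single intensity rows stand in for full image frames, and the values are illustrative:

```python
# Hedged sketch: locate the overlap between two intensity rows by minimizing
# mean squared difference over candidate offsets, then stitch at that offset.
# Rows and values are illustrative assumptions.

def find_overlap(row_a, row_b, min_overlap=3):
    """Return the index in row_a where row_b starts best matching."""
    best_offset, best_err = None, float("inf")
    for offset in range(len(row_a) - min_overlap + 1):
        n = min(len(row_a) - offset, len(row_b))
        err = sum((row_a[offset + i] - row_b[i]) ** 2 for i in range(n)) / n
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

def stitch(row_a, row_b):
    """Combine the two rows at their best-matching overlap."""
    offset = find_overlap(row_a, row_b)
    return row_a[:offset] + row_b

a = [10, 10, 80, 90, 40, 40]
b = [90, 40, 40, 10, 10]   # its head repeats the tail of `a`
print(stitch(a, b))        # [10, 10, 80, 90, 40, 40, 10, 10]
```

The same minimize-mismatch-then-merge idea extends to 2D frames, where the winning offset anchors each new frame into the growing spatial representation.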
Some aspects include a schedule development method for a robotic floor-cleaning device that recognizes patterns in user input to automatically devise a work schedule.
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
54.
Method for autonomously controlling speed of components and functions of a robot
Provided is a robot including main and peripheral brushes; a first actuator; a first sensor; one or more processors; and memory storing instructions that when executed by the one or more processors effectuate operations including: determining a first location of the robot in a working environment; obtaining, with the first sensor or another sensor, first data indicative of an environmental characteristic of the first location; adjusting a first operational parameter of the first actuator based on the sensed first data to cause the first operational parameter to be in a first adjusted state while the robot is at the first location; and forming or updating a debris map of the working environment based on data output by the first sensor or the another sensor configured to collect data indicative of an existence of debris on a floor of the working environment over at least one cleaning session.
G05D 1/02 - Control of position or course in two dimensions
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 9/04 - Nozzles with driven brushes or agitators
B25J 11/00 - Manipulators not otherwise provided for
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided is a process executed by a robot, including: traversing, to a first position, a first distance in a backward direction; after traversing the first distance, rotating in a first rotation; after the first rotation, traversing, to a second position, a second distance in a third direction; after traversing the second distance, rotating 180 degrees in a second rotation such that the field of view of the sensor points in a fourth direction; after the second rotation, traversing, to a third position, a third distance in the fourth direction; after traversing the third distance, rotating 180 degrees in a third rotation such that the field of view of the sensor points in the third direction; and after the third rotation, traversing, to a fourth position, a fourth distance in the third direction.
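The claimed maneuver sequence can be simulated as pose updates on the plane: back up, turn, then sweep alternating rows with 180-degree turns. The distances and the 90-degree first rotation below are illustrative assumptions; the claim leaves them unspecified:

```python
# Hedged sketch: simulate the claimed back-up / turn / sweep sequence as
# (x, y, heading) pose updates. Distances and the first-turn angle are
# illustrative assumptions.

import math

def drive(pose, distance):
    """Move `distance` along the current heading (negative = backward)."""
    x, y, heading = pose
    return (x + distance * math.cos(heading),
            y + distance * math.sin(heading),
            heading)

def rotate(pose, degrees):
    """Turn in place by `degrees` (counterclockwise positive)."""
    x, y, heading = pose
    return (x, y, (heading + math.radians(degrees)) % (2 * math.pi))

pose = (0.0, 0.0, 0.0)    # facing +x
pose = drive(pose, -1.0)  # first distance, backward
pose = rotate(pose, 90)   # first rotation -> facing +y ("third direction")
pose = drive(pose, 4.0)   # second distance
pose = rotate(pose, 180)  # second rotation -> facing -y ("fourth direction")
pose = drive(pose, 4.0)   # third distance
pose = rotate(pose, 180)  # third rotation -> facing +y again
pose = drive(pose, 4.0)   # fourth distance
print(round(pose[0], 6), round(pose[1], 6))  # -1.0 4.0
```

Note the claim's traversals retrace the same line; a lateral offset between rows (typically applied during each turn) would turn this into the familiar boustrophedon coverage pattern.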
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/55 - Depth or shape recovery from multiple images
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
56.
Method for automatically removing obstructions from robotic floor-cleaning devices
Some embodiments include a robot, including: a plurality of sensors; at least one encoder; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: measuring, with the at least one encoder, wheel rotation of at least one wheel; capturing, with an image sensor, images of an environment as the robot moves within the environment; identifying, with the processor, at least one characteristic of at least one object captured in the images of the environment; determining, with the processor, an object type of the at least one object based on characteristics of different types of objects stored in an object database; and instructing, with the processor, the robot to execute at least one action based on at least one of: the object type of the at least one object and the measured wheel rotation of the at least one wheel.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
G06K 9/62 - Methods or arrangements for recognition using electronic means
G05D 1/02 - Control of position or course in two dimensions
G06V 10/40 - Extraction of image or video features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
57.
Method and apparatus for combining data to construct a floor plan
A robot adapted to capture a plurality of data; perceive a model of the environment based on the plurality of data; determine areas within which work was performed and areas within which work is yet to be performed; store the model of the environment in a memory accessible to the processor; and transmit the model of the environment and a status of the robot to an application of a smartphone previously paired with the robot.
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
A medium storing instructions that when executed by a processor of a robot effectuates operations including detecting an object in a line of sight of at least one sensor; adjusting a current path of the robot to include a detour path around the object; instructing the robot to resume along the current path after avoiding the object; discounting areas of overlap from a total area covered based on at least some data collected by sensors; inferring previously visited areas and unvisited areas; generating a planar representation of a workspace of the robot by stitching data collected by at least some sensors of the robot at overlapping points; and transmitting the planar representation and coverage statistics to an application of a communication device configured to display the information.
A method for a robot to autonomously plan a navigational route and work duties in an environment of the robot including accessing historical sensor data stored from prior work cycles, determining the navigational route and work duties of the robot by processing probabilities based on the historical sensor data, enacting the navigational route and work duties by the robot, capturing new sensor data while the robot enacts the navigational route and work duties, processing the new sensor data, and altering the navigational route and work duties based on the new sensor data processed.
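One reading of "processing probabilities based on the historical sensor data" is estimating, per area, how often work was actually needed in prior cycles and ordering the route accordingly. A minimal sketch — the area names and counts are illustrative assumptions:

```python
# Hedged sketch: order work areas by historical debris probability, with
# Laplace smoothing so sparsely visited areas are not over- or under-rated.
# The history counts are illustrative assumptions.

# area -> (cycles in which debris was found, total prior work cycles)
HISTORY = {"entryway": (9, 10), "bedroom": (2, 10), "kitchen": (7, 10)}

def debris_probability(area, history=HISTORY):
    """Smoothed estimate of finding debris in this area on the next cycle."""
    found, total = history[area]
    return (found + 1) / (total + 2)

def plan_route(history=HISTORY):
    """Visit areas in order of decreasing historical debris probability."""
    return sorted(history, key=debris_probability, reverse=True)

print(plan_route())  # ['entryway', 'kitchen', 'bedroom']
```

The claim's final steps close the loop: each new cycle's sensor data updates these counts, so the planned route drifts as household usage patterns change.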
A mop module of a robot, including: a liquid reservoir for storing liquid; and an electronically-controlled liquid release mechanism; wherein: the electronically-controlled liquid release mechanism releases liquid from the liquid reservoir for mopping a work surface; operation and a schedule of operation of the electronically-controlled liquid release mechanism in at least one area is controlled by a processor of the robot within which the mop module is installed or based on input provided to an application of a communication device paired with the robot; a liquid flow rate depends on at least an amount of power delivered to the electronically-controlled liquid release mechanism; and the liquid flow rate for the at least one area is determined by the processor of the robot within which the mop module is installed or an input provided to the application of the communication device paired with the robot.
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A47L 13/58 - Wringers for scouring pads, mops, or the like, combined with buckets
Provided is a robotic device, including: a chassis; a set of wheels; one or more motors to drive the set of wheels; a suspension system; a controller in communication with the one or more motors; at least one sensor; a camera; one or more processors; a tangible, non-transitory, machine readable medium storing instructions that when executed by the one or more processors effectuate operations including: capturing, with the camera, spatial data of surroundings; generating, with the one or more processors, a spatial model of the surroundings based on the spatial data; generating, with the one or more processors, a movement path based on the spatial model of the surroundings; inferring, with the one or more processors, a location of the robotic device; and updating, with the one or more processors, the movement path to exclude locations of the movement path that the robotic device has previously been located.
A method for identifying objects for autonomous robots, including: capturing, with an image sensor disposed on an autonomous robot, images of a workspace, wherein a field of view of the image sensor captures at least an area in front of the autonomous robot; obtaining, with a processing unit disposed on the autonomous robot, the images; generating, with the processing unit, a feature vector from the images; comparing, with the processing unit, at least one object captured in the images to objects in an object dictionary; identifying, with the processing unit, a class to which the at least one object belongs; and executing, with the autonomous robot, instructions based on the class of the at least one object identified.
Provided is a robotic device, including: a chassis; a set of wheels; a control system; a battery; one or more sensors; a processor; a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with the one or more sensors, data of an environment of the robotic device and data indicative of movement of the robotic device; generating or updating, with the processor, a map of the environment based on at least a portion of the captured data; inferring, with the processor, a current location of the robotic device; and generating or updating, with the processor, a movement path of the robotic device based on at least the map of the environment, at least a portion of the captured data, and the inferred current location of the robotic device.
Some aspects include a schedule development method for a robotic floor-cleaning device that recognizes patterns in user input to automatically devise a work schedule.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
Provided are operations including: receiving, with one or more processors of a robot, an image of an environment from an imaging device separate from the robot; obtaining, with the one or more processors, raw pixel intensity values of the image; extracting, with the one or more processors, objects and features in the image by grouping pixels with similar raw pixel intensity values, and by identifying areas in the image with greatest change in raw pixel intensity values; determining, with the one or more processors, an area within a map of the environment corresponding with the image by comparing the objects and features of the image with objects and features of the map; and, inferring, with the one or more processors, one or more locations captured in the image based on the location of the area of the map corresponding with the image.
Provided is an integrated bumper system of a robot, including: a bumper elastically coupled with a chassis of the robot and comprising an opening in a top surface of the bumper; at least one elastic element coupled to the chassis and interfacing with the bumper; and a bridging element coupled with the chassis or another component of the robot, wherein the bumper slides relative to the bridging element upon an impact and a release of the impact; a perimeter of the opening of the bumper is hidden beneath the bridging element when the bumper is in a neutral position and an impacted position; and the perimeter of the opening of the bumper remains hidden beneath the bridging element during movement of the bumper caused by the impact and the release of the impact.
B62D 24/04 - Vehicle body mounted on resilient suspension for movement relative to the vehicle frame
A47L 9/00 - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
B62D 27/04 - Connections between superstructure sub-units resilient
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G05D 1/02 - Control of position or course in two dimensions
67.
Method for constructing a map while performing work
Provided is a method including: capturing first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/55 - Depth or shape recovery from multiple images
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
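The map-generation step in entry 67 can be sketched as an occupancy update: each range reading, taken from the robot's current pose, marks the cell it terminates in as occupied. The dictionary-based grid, the pose tuple, and the uniform angle step below are assumptions for illustration, not the patented method.

```python
import math

def update_map(grid, pose, radial_distances, angle_step, cell_size=1.0):
    """Mark the grid cell hit by each range reading as occupied.
    `pose` is (x, y, heading); readings are radial distances taken at
    successive angles from the heading (layout is illustrative)."""
    x, y, heading = pose
    for i, d in enumerate(radial_distances):
        a = heading + i * angle_step
        cx = int((x + d * math.cos(a)) / cell_size)
        cy = int((y + d * math.sin(a)) / cell_size)
        grid[(cx, cy)] = "occupied"
    return grid

# two readings of 2.0 m, one straight ahead and one 90 degrees left
grid = update_map({}, (0.0, 0.0, 0.0), [2.0, 2.0], math.pi / 2)
print(grid)  # {(2, 0): 'occupied', (0, 2): 'occupied'}
```

A complete mapper would also mark the cells the beam passes through as free; only the hit-cell update is shown.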
Provided is a robot, including: a chassis; a set of wheels coupled to the chassis; a processor; and a tangible, non-transitory, machine-readable medium storing instructions that when executed by the processor effectuate operations including: capturing, by an image sensor disposed on a robot, images of a workspace; obtaining, by the processor of the robot or via the cloud, the captured images; comparing, by the processor of the robot or via the cloud, at least one object from the captured images to objects in an object dictionary; identifying, by the processor of the robot or via the cloud, a class to which the at least one object belongs using an object classification unit; and instructing, by the processor of the robot, the robot to execute at least one action based on the object class identified.
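The comparison and classification steps above can be imagined as nearest-neighbor matching of a detected object's feature vector against labeled entries in an object dictionary; the feature vectors and class names below are hypothetical, and real systems would use a learned classifier.

```python
def classify(feature_vec, object_dictionary):
    """Return the class label of the dictionary entry nearest to
    `feature_vec` by squared Euclidean distance; a toy stand-in for
    the object classification unit described in the abstract."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(object_dictionary,
               key=lambda name: sq_dist(feature_vec, object_dictionary[name]))

dictionary = {"sock": [0.9, 0.1], "cable": [0.2, 0.8]}  # hypothetical entries
print(classify([0.85, 0.2], dictionary))  # sock
```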
A system for robotic collaboration, including: a first robotic chassis and a second robotic chassis, each including wheels; a control system; a power supply; at least one sensor; a processor; and a medium storing instructions that when executed by the respective processor effectuates operations including: capturing data of an environment and data indicative of movement; generating a map of the environment based on at least some of the captured data; inferring a current location of the respective robotic chassis based on at least some of the captured data; and executing a portion of a task, the second robotic chassis executing a second part of the task after the first robotic chassis completes a first part of the task.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
70.
Method and system for collaborative construction of a map
Methods and systems for collaboratively constructing a map of an environment. One or more sensory devices installed on an autonomous vehicle take readings within a field of view of the sensory device. As the vehicle moves within the environment, the sensory device continuously takes readings within new fields of view. At the same time, sensory devices installed on other autonomous vehicles operating within the same environment and/or fixed devices monitoring the environment take readings within their respective fields of view. The readings recorded by a processor of each autonomous vehicle may be shared with all other processors of autonomous vehicles operating within the same environment with whom a data transfer channel is established. Processors combine overlapping readings to construct continuously growing segments of the map. Combined readings are taken by the same sensory device or by different sensory devices and are taken at the same time or at different times.
G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
G01S 1/00 - Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
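A toy version of combining overlapping readings from two vehicles in entry 70: exhaustively try small integer translations of one partial map and merge at the shift with the most overlapping occupied cells. The set-of-cells map representation, the search window, and the overlap threshold are all assumptions; real systems would align continuous scans, not integer grids.

```python
def merge_maps(map_a, map_b, min_overlap=3, search=range(-5, 6)):
    """Merge two partial maps (sets of occupied (x, y) cells) at the
    integer translation of map_b that maximizes cell overlap with
    map_a; return None if the best overlap is too small to trust."""
    best_shift, best_overlap = None, 0
    for dx in search:
        for dy in search:
            shifted = {(x + dx, y + dy) for x, y in map_b}
            overlap = len(map_a & shifted)
            if overlap > best_overlap:
                best_shift, best_overlap = (dx, dy), overlap
    if best_overlap < min_overlap:
        return None  # not enough agreement to merge
    dx, dy = best_shift
    return map_a | {(x + dx, y + dy) for x, y in map_b}

a = {(0, 0), (1, 0), (2, 0), (3, 0)}      # wall seen by vehicle A
b = {(2, 1), (3, 1), (4, 1), (5, 1)}      # same wall, offset by (2, 1)
print(sorted(merge_maps(a, b)))  # [(0, 0), (1, 0), (2, 0), (3, 0)]
```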
71.
METHOD OF LIGHTWEIGHT SIMULTANEOUS LOCALIZATION AND MAPPING PERFORMED ON A REAL-TIME COMPUTING AND BATTERY OPERATED WHEELED DEVICE
Some aspects include a method for operating a wheeled device, including: capturing, by a primary sensor coupled to the wheeled device, primary sensor data indicative of a plurality of radial distances to objects; transforming, by a processor of the wheeled device, the plurality of radial distances from a perspective of the primary sensor to a perspective of the wheeled device; generating, by the processor, a partial map of visible areas in real-time at a first position of the wheeled device based on the primary sensor data and some secondary sensor data, wherein: the partial map is a bird's eye view; and the processor iteratively completes a full map of the environment based on new sensor data captured by sensors as the wheeled device performs work within the environment and new areas become visible to the sensors; and executing, by the wheeled device, a movement path to a second position.
G05D 1/02 - Control of position or course in two dimensions
G01C 21/12 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning
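The transform step in entry 71, from the perspective of the primary sensor to the perspective of the wheeled device, can be sketched as a polar-to-Cartesian conversion with a sensor mounting offset; the offset, start angle, and angle step below are illustrative parameters, not values from the patent.

```python
import math

def to_device_frame(radial, start_angle, angle_step, sensor_offset):
    """Convert radial distances measured from the sensor origin into
    (x, y) points in the wheeled device's frame, given the sensor's
    mounting offset (ox, oy) from the device center."""
    ox, oy = sensor_offset
    points = []
    for i, d in enumerate(radial):
        a = start_angle + i * angle_step
        points.append((ox + d * math.cos(a), oy + d * math.sin(a)))
    return points

# sensor mounted 0.1 m ahead of the device center
pts = to_device_frame([1.0, 1.0], 0.0, math.pi / 2, (0.1, 0.0))
print([(round(x, 2), round(y, 2)) for x, y in pts])  # [(1.1, 0.0), (0.1, 1.0)]
```

A sensor mounted at an angle would additionally need a rotation before the offset; only the translation case is shown.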
72.
Surface coverage optimization method for mobile robotic devices
A method for covering a surface by a robotic device including: generating a two-dimensional map of a workspace using data from at least a depth measurement device positioned on the robotic device, dividing the two-dimensional map into a grid of cells, identifying the cells as free, occupied, or unknown, localizing the robotic device within the two-dimensional map, identifying at least one frontier within the map for exploration, generating a spanning tree such that a movement path of the robotic device includes a repetition of movement in a first direction along a straight line, 180 degree rotation over a distance perpendicular to the first direction, movement in a second direction opposite the first direction along a straight line, and 180 degree rotation over a distance perpendicular to the second direction, and recording the number of collisions incurred and the areas covered by the robotic device while executing the movement path.
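The repeated straight-line-then-180-degree-rotation pattern described above is a boustrophedon sweep. A minimal sketch ordering free grid cells into such a path (even rows left-to-right, odd rows right-to-left); the cell representation is an assumption, and the spanning-tree, frontier, and collision-recording aspects are omitted.

```python
def boustrophedon_path(free_cells):
    """Order free (row, col) cells into a back-and-forth coverage
    path: left-to-right on even-indexed rows, right-to-left on odd."""
    rows = sorted({r for r, _ in free_cells})
    path = []
    for i, r in enumerate(rows):
        cols = sorted((c for rr, c in free_cells if rr == r),
                      reverse=(i % 2 == 1))
        path.extend((r, c) for c in cols)
    return path

cells = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)}
print(boustrophedon_path(cells))
# [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```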
Provided is a first robot including: a machine readable medium storing instructions that when executed by the processor of the first robot effectuates operations including: executing, with the processor of the first robot, a task; and transmitting, with the processor of the first robot, a signal to a processor of a second robot during execution of the task when its power supply level drops below a predetermined threshold; and the second robot including: a machine readable medium storing instructions that when executed by the processor of the second robot effectuates operations including: executing, with the processor of the second robot, the remainder of the task upon receiving the signal transmitted from the processor of the first robot; and wherein the first robot navigates to a charging station when its power supply level drops below the predetermined threshold and wherein the first robot and second robot provide the same services.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
G05D 1/02 - Control of position or course in two dimensions
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
74.
Method and apparatus for combining data to construct a floor plan
A robot for perceiving a spatial representation of an environment, including: an actuator, at least one sensor, a processor, and memory storing instructions that when executed by the processor effectuates operations including: capturing a plurality of data by the at least one sensor of the robot, wherein: the plurality of data comprises first data comprising pixel characteristics indicative of features of the environment and second data indicative of depth to objects in the environment; the plurality of data is captured from different positions within the environment through which the robot moves, the plurality of data corresponding with respective positions from which the plurality of data was captured; and the plurality of data captured from different respective positions within the environment corresponds to respective fields of view; and aligning the plurality of data as it is captured to more accurately perceive the spatial representation of the environment.
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
Methods for utilizing virtual boundaries with robotic devices are presented including: positioning a boundary component having a receiver pair to receive a first robotic device signal substantially simultaneously by each receiver of the receiver pair from a robotic device only when the robotic device is positioned along a virtual boundary; operating the robotic device to move automatically within an area co-located with the virtual boundary; transmitting the first robotic device signal by the robotic device; and receiving the first robotic device signal by the receiver pair thereby indicating that the robotic device is positioned along the virtual boundary.
Provided is a wheeled device, including: a chassis; a set of wheels coupled to the chassis; one or more electric motors to rotate the set of wheels; a network card for wireless connection to the internet; a plurality of sensors; a processor electronically coupled to the plurality of sensors; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing, with at least one exteroceptive sensor, measurement readings of the environment; and estimating, with the processor using a statistical ensemble of simulated positions of the wheeled device and the measurement readings, a corrected position of the wheeled device to replace a last known position of the wheeled device.
Provided is a robot, including: a chassis; a set of wheels coupled to the chassis; at least one motor; at least one motor controller; a range finding system; a plurality of sensors; a processor; and a medium storing instructions that when executed by the processor effectuates operations including: measuring, with the range finding system, distances to surfaces opposite the range finding system as the robot moves relative to the surfaces; monitoring, with the processor, the distance measurements taken by the range finding system; discarding, with the processor, outlier distance measurements that reflect an interruption in otherwise steadily fitting distance measurements taken as the robot moves towards and away from surrounding obstacles; and determining, with the processor, a position of an obstacle by identifying a position of the range finding system immediately before and after encountering the obstacle, signified by the interruption detected in the distance measurements.
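The interruption detection above can be sketched as flagging indices where consecutive range readings jump by more than a tolerance, treating everything else as the steady fit; the jump threshold is an assumed parameter, and a real system would fit a model to the steady readings rather than difference neighbors.

```python
def find_interruptions(distances, jump=0.5):
    """Return indices where consecutive range readings change by more
    than `jump`, marking the positions just before and after an
    obstacle interrupts otherwise steadily varying measurements."""
    return [i for i in range(1, len(distances))
            if abs(distances[i] - distances[i - 1]) > jump]

# obstacle passes through the beam between indices 2 and 5
stream = [3.0, 2.9, 2.8, 0.4, 0.4, 2.6, 2.5]
print(find_interruptions(stream))  # [3, 5]
```

Pairing each interruption index with the robot's pose at that instant gives the obstacle-boundary positions the claim describes.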
Provided is a robotic device, including: a chassis; a set of wheels; one or more motors to drive the set of wheels; a controller in communication with the one or more motors; one or more surface cleaning tools; at least one sensor; a camera; one or more processors; a medium storing instructions that when executed by the one or more processors effectuate operations including: capturing, with the camera of the robotic device, spatial data of surroundings of the robotic device; generating, with the one or more processors of the robotic device, a movement path based on the spatial data of the surroundings; inferring, with the one or more processors of the robotic device, a location of the robotic device; and updating, with the one or more processors of the robotic device, the movement path to exclude locations of the movement path that the robotic device has previously been located.
Provided is a machine-readable medium storing instructions that when executed by a processor effectuate operations including: receiving, with an application executed by a communication device, a first set of inputs including user data; generating, with the application, a three-dimensional model of the user based on the user data; receiving, with the application, a second set of inputs including a type of clothing garment; generating, with the application, a first set of clothing garments including clothing garments from a database of clothing garments that are the same type of clothing garment; generating, with the application, a second set of clothing garments from the first set of clothing garments based on the user data and one or more relationships between clothing attributes and human attributes; and presenting, with the application, the clothing garments from the second set of clothing garments virtually fitted on the three-dimensional model of the user.
Provided is a method including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
Provided is a method for operating a robot, including: capturing images of a workspace; capturing movement data indicative of movement of the robot; capturing LIDAR data as the robot performs work within the workspace; comparing at least one object from the captured images to objects in an object dictionary; identifying a class to which the at least one object belongs; generating a first iteration of a map of the workspace based on the LIDAR data; generating additional iterations of the map based on newly captured LIDAR data and newly captured movement data; actuating the robot to drive along a trajectory that follows along a planned path by providing pulses to one or more electric motors of wheels of the robot; and localizing the robot within an iteration of the map by estimating a position of the robot based on the movement data, slippage, and sensor errors.
Provided is a robotic device including a medium storing instructions that when executed by one or more processors effectuate operations including: capturing, with a camera, spatial data of surroundings; generating, with the one or more processors, a movement path based on the spatial data; predicting, with the one or more processors, a new predicted state of the robotic device including at least a predicted position of the robotic device, wherein predicting the new predicted state includes: capturing, with at least one sensor, movement readings of the robotic device; predicting, with the one or more processors, the new predicted state using a motion model of the robotic device based on a previous predicted state of the robotic device and the movement readings; and updating, with the one or more processors, the movement path to exclude locations of the movement path that the robotic device has previously been predicted to be positioned.
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
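The motion-model prediction step above can be illustrated with a generic unicycle model: the next predicted state follows from the previous predicted state plus movement readings (linear and angular speed over an interval). This is a standard textbook stand-in, not the specific model claimed.

```python
import math

def predict_state(state, v, w, dt):
    """Predict the next (x, y, heading) from the previous predicted
    state and movement readings: linear speed v and angular speed w
    applied over interval dt (simple unicycle model)."""
    x, y, th = state
    return (x + v * dt * math.cos(th),
            y + v * dt * math.sin(th),
            th + w * dt)

# drive straight ahead at 1 m/s for 2 s from the origin
s = predict_state((0.0, 0.0, 0.0), v=1.0, w=0.0, dt=2.0)
print(s)  # (2.0, 0.0, 0.0)
```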
83.
Method for tracking movement of a mobile robotic device
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: capturing visual readings to objects within an environment; capturing readings of wheel rotation; capturing readings of a driving surface; capturing distances to obstacles; determining displacement of the robotic device in two dimensions based on sensor readings of the driving surface; estimating, with the processor, a corrected position of the robotic device to replace a last known position of the robotic device; determining a most feasible element in an ensemble based on the visual readings; and determining a most feasible position of the robotic device as the corrected position based on the most feasible element in the ensemble and the visual readings.
Methods for detecting an alignment of a robot with a virtual line, including: transmitting, with at least one transmitter of the robot, a first signal; receiving, with a first receiver and a second receiver of a device, the first signal; detecting, with the device, that the robot is aligned with the virtual line when the first receiver and the second receiver of the device simultaneously receive the first signal; transmitting, with at least one transmitter of the device, a second signal indicating that the robot is aligned with the virtual line; receiving, with at least one receiver of the robot, the second signal; actuating, with a processor of the robot, the robot to execute a movement upon receiving the second signal; and marking, with the processor of the robot, the location of the device in a map of an environment of the robot.
Some aspects include a method for operating a cleaning robot, including: capturing LIDAR data; generating a first iteration of a map of the environment in real time; capturing sensor data from different positions within the environment; capturing movement data indicative of movement of the cleaning robot; aligning and integrating newly captured LIDAR data with previously captured LIDAR data at overlapping points; generating additional iterations of the map based on the newly captured LIDAR data and at least some of the newly captured sensor data; localizing the cleaning robot; planning a path of the cleaning robot; and actuating the cleaning robot to drive along a trajectory that follows along the planned path by providing pulses to one or more electric motors of wheels of the cleaning robot.
An autonomous mobile robotic device that may carry and transport one or more items within an environment. The robotic device may comprise a platform on which the one or more items may be placed. The robotic device may pick up, deliver, distribute, and/or transport the one or more items to one or more locations. The robotic device may be provided with scheduling information for task execution or for pick-up, delivery, distribution, and/or transportation of one or more items. Once tasks are complete, the robotic device may autonomously navigate to a storage location.
Provided is a method including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area; actuating, with the processor of the robot, the robot to move along the first movement path.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
A47L 11/40 - Parts or details of machines not provided for in groups , or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/20 - Instruments for performing navigational calculations
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 3/00 - Geometric image transformation in the plane of the image
Provided is a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor effectuates operations including: capturing, with at least one exteroceptive sensor, readings of an environment and capturing, with at least one proprioceptive sensor, readings indicative of displacement of a wheeled device; estimating, with the processor using an ensemble of simulated positions of possible new locations of the wheeled device, the readings of the environment, and the readings indicative of displacement, a corrected position of the wheeled device to replace a last known position of the wheeled device; determining, by the processor using the readings of the exteroceptive sensor, a most feasible position of the wheeled device as the corrected position; and, transmitting, by the processor, status information of tasks performed by the wheeled device to an external processor, wherein the status information initiates a second wheeled device to perform a second task.
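A toy version of the ensemble correction above: scatter simulated positions around the dead-reckoned estimate and keep the member that best agrees with the exteroceptive readings, here abstracted as an error function. The Gaussian spread, ensemble size, and error model are assumptions; a full implementation would resample and reweight rather than pick a single member.

```python
import random

def correct_position(last_pos, displacement, exteroceptive_error, n=200):
    """Simulate an ensemble of possible new positions around the
    dead-reckoned estimate and return the most feasible member, i.e.
    the one with the lowest exteroceptive error (names and the error
    model are illustrative)."""
    random.seed(0)  # deterministic for this sketch
    dx, dy = displacement
    guess = (last_pos[0] + dx, last_pos[1] + dy)
    ensemble = [(guess[0] + random.gauss(0, 0.1),
                 guess[1] + random.gauss(0, 0.1)) for _ in range(n)]
    return min(ensemble, key=exteroceptive_error)

# toy exteroceptive model: the sensors place the robot near (1.0, 0.5)
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2
x, y = correct_position((0.0, 0.0), (1.0, 0.45), err)
print(round(x, 2), round(y, 2))
```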
Systems and methods for sending scheduling information to a mobile robotic device from an application of a communication device. The application of the communication device generates at least one scheduling command and transmits the at least one scheduling command to a router using a first wireless communication channel. The router is configured to transmit and receive the at least one scheduling command to and from at least one cloud service. A charging station of the robotic device receives the at least one scheduling command from the router using the first wireless communication channel and stores the at least one scheduling command on the charging station. The charging station transmits the at least one scheduling command to a processor of the robotic device using a second wireless communication channel and the processor of the robotic device modifies its scheduling information based on the at least one scheduling command.
A retractable cable assembly for use with an electrical charger, power adapter, or other power supply. A cable wound on a spool disposed within a housing may be extracted by manually pulling on the cable or by pressing a release switch until the desired length of cable is drawn. As the cable is drawn, an engaged locking mechanism keeps the cable in place during and after extraction until retraction is desired. Rotating or twisting at least a portion of the housing disengages the locking mechanism, freeing the cable and immediately retracting it within the housing.
Provided is a robotic towing device including: a mobile robotic chassis; a set of wheels coupled to the mobile robotic chassis; one or more motors; one or more processors; a casing coupled to the mobile robotic chassis; one or more arms coupled to the mobile robotic chassis on a first end; one or more lifts, each of the one or more lifts corresponding and coupled to one of the one or more arms on a second end; and one or more lift wheels, each of the one or more lift wheels corresponding and coupled to a terminal end of one of the one or more lifts.
B60P 3/07 - Vehicles adapted to transport, to carry or to comprise special loads or objects for carrying vehicles for carrying road vehicles
B62D 61/10 - Motor vehicles or trailers, characterised by the arrangement or number of wheels, not otherwise provided for, e.g. four wheels in diamond pattern with more than four wheels
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
92.
Method and apparatus for overexposing images captured by drones
Provided is a method for overexposing images captured by a camera of a camera carrying device, including: providing a camera disabling apparatus within an environment, including: a housing; a camera disposed within the housing; a movable high power light source; a motor coupled to the high power light source; and a processor for detecting the camera carrying device in captured images of the environment; capturing, with the camera, an image of the environment; detecting, with the processor, the camera carrying device in the captured image; activating, with the processor, a light beam of the high power light source when the camera carrying device is detected in the captured image; and actuating, with the processor, the motor to direct the light beam of the high power light source towards the camera carrying device such that images captured by the camera of the camera carrying device are overexposed.
Included is a method for autonomous robotic refuse container replacement including: transmitting, by a processor of a first robotic refuse container, a request for replacement to a portion of processors of robotic refuse containers; receiving, by the processor of the first robotic refuse container, a return signal from a portion of processors of the robotic refuse containers; transmitting, by the processor of the first robotic refuse container, a confirmation for replacement to a processor of a second robotic refuse container in response to a return signal received from the processor of the second robotic refuse container; instructing, by the processor of the first robotic refuse container, the first robotic refuse container to navigate to a second location from a current location; and instructing, by the processor of the second robotic refuse container, the second robotic refuse container to navigate to the current location of the first robotic refuse container.
Included is a refuse bag replacement method including: detecting, by one or more sensors of the robotic refuse container, a refuse bag fill level; instructing, by a processor of the robotic refuse container, the robotic refuse container to navigate to a refuse collection site upon detecting a predetermined refuse bag fill level; and instructing, by the processor of the robotic refuse container, the robotic refuse container to discard a refuse bag housed within the robotic refuse container at the refuse collection site.
Provided is an autonomous versatile robotic chassis, including: a chassis; a set of wheels coupled to the chassis; one or more motors to drive the set of wheels; one or more mounting elements; at least one food equipment coupled to the robotic chassis using the one or more mounting elements; a processor; one or more sensors; a camera; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations including: generating, with the processor, a map of an environment; localizing, with the processor, the robotic chassis; receiving, with the processor, a request for delivery of a food item to a first location; generating, with the processor, a movement path to the first location from a current location; and instructing, with the processor, the robotic chassis to transport the food item to the first location by navigating along the movement path.
G06N 7/00 - Computing arrangements based on specific mathematical models
B60N 3/10 - Arrangements or adaptations of other passenger fittings, not otherwise provided for of receptacles for food or beverages, e.g. refrigerated
96.
Efficient coverage planning of mobile robotic devices
Provided is a robot-implemented process to create a coverage plan for a work environment, including: obtaining, with a robotic device, raw data values of a work environment pertaining to the likelihood of operational success and the presence or absence of operational hazards within the work environment; determining, with one or more processors, the most efficient coverage plan for the robotic device based on the raw data values; and enacting, with the robotic device, the coverage plan.
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 of systems according to group G01S 17/00
Some aspects include a method including: generating, with a processor of the robot, a map of the workspace; segmenting, with the processor of the robot, the map into a plurality of zones; transmitting, with the processor of the robot, the map to an application of a communication device; receiving, with the application, the map; displaying, with the application, the map; receiving, with the application, at least one input for the map; implementing, with the application, the at least one input into the map to generate an updated map of the workspace; transmitting, with the application, the updated map to the processor of the robot; receiving, with the processor of the robot, the updated map; generating, with the processor of the robot, a movement path based on the map or the updated map; and actuating, with the processor of the robot, the robot to traverse the movement path.
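The map round trip in the method above (robot transmits map, app applies a user input, robot replans on the updated map) can be sketched with a grid map and a single input type, a user-drawn no-go zone. All names here are illustrative; the abstract does not limit what the "at least one input" is, and the boustrophedon traversal stands in for whatever movement path the processor generates.

```python
def apply_user_input(workspace_map, no_go_zone):
    """App side: return an updated map with the user-selected cells
    marked as blocked (1). The original map is left untouched, as the
    robot keeps its copy until the updated map is transmitted back."""
    updated = [row[:] for row in workspace_map]
    for r, c in no_go_zone:
        updated[r][c] = 1
    return updated


def plan_route(workspace_map):
    """Robot side: generate a movement path visiting every free cell,
    sweeping rows in alternating (boustrophedon) direction."""
    route = []
    for r, row in enumerate(workspace_map):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        route.extend((r, c) for c in cols if row[c] == 0)
    return route
```

Splitting the edit (app) and the planning (robot) into separate functions mirrors the transmit/receive boundary the method enumerates.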
Provided is an autonomous mobile robotic device that may carry, transport, and deliver one or more items in a work environment to predetermined destinations. The robotic device may comprise a container in which the one or more items may be placed. Once tasks are complete, the robotic device may autonomously navigate to a predetermined location.
Included is a method for collaboration between a first robotic chassis and a second robotic chassis, including: executing, with the processor of the first robotic chassis, a first part of a task; transmitting, with the processor of the first robotic chassis, a signal to a processor of the second robotic chassis upon completion of the first part of the task; and executing, with the processor of the second robotic chassis, a second part of the task upon receiving the signal transmitted from the processor of the first robotic chassis; wherein the first robotic chassis and the second robotic chassis provide differing services.
B62D 33/063 - Drivers' cabs movable from one position into at least one other position, e.g. tiltable, pivotable about a vertical axis, displaceable from one side of the vehicle to the other
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/02 - Control of position or course in two dimensions
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
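The signalled hand-off in the collaboration method above is a small event-driven pattern: the first chassis executes its part, then transmits a completion signal that triggers the second chassis's differing service on the same task. A minimal sketch, with all class and attribute names assumed for illustration:

```python
class Chassis:
    """One robotic chassis offering a single service; `downstream` is the
    peer chassis (if any) that should act once this one finishes."""

    def __init__(self, service, downstream=None):
        self.service = service
        self.downstream = downstream
        self.log = []  # record of (service, task) executions, for inspection

    def execute(self, task):
        self.log.append((self.service, task))
        if self.downstream is not None:
            # Transmit the completion signal to the second chassis.
            self.downstream.receive_signal(task)

    def receive_signal(self, task):
        # The second part of the task starts only upon receiving the signal.
        self.execute(task)
```

For example, a vacuuming chassis chained to a mopping chassis would finish its pass over an area and signal the mopper, which then covers the same area with its own, differing service.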