update web docs
This commit is contained in:
Parent
2616a06f2b
Commit
a71165d4ae
@@ -58,7 +58,7 @@
<li><a href="#labanotation-suite">Labanotation Suite</a></li>
<li><a href="#gesturebot-design-kit">gestureBot Design Kit</a></li>
<li class="main "><a href="#navigation">Navigation</a></li>
<li><a href="#hololensnavigation-self-calibrating-indoor-navigation">HololensNavigation: Self-calibrating Indoor Navigation</a></li>
|
||||
<li><a href="#hololensnavigation">HoloLensNavigation</a></li>
<li class="main "><a href="#manipulation">Manipulation</a></li>
</ul>
</div></div>
@@ -74,23 +74,33 @@
<h3 id="labanotation-suite"><a href="https://github.com/microsoft/LabanotationSuite">Labanotation Suite</a></h3>
<p>The Labanotation Suite is a collection of applications comprising a system that can give service robots the ability to move in natural and meaningful ways. It includes software tools, source code, sample data, and hardware-simulation software that supports experimentation with the concepts presented in the paper <strong><a href="https://link.springer.com/article/10.1007%2Fs11263-018-1123-1">Describing Upper-Body Motions Based on Labanotation for Learning-from-Observation Robots</a> (International Journal of Computer Vision, December 2018)</strong>. The system consists of compiled gesture-capture applications that use the Microsoft Kinect sensor device and a Windows 10 PC. Editing tools written in Python provide gesture trimming and movement-analysis options to identify key points of movement. Output includes graphical Labanotation scores as well as movement data expressed in Labanotation and stored in JSON format.</p>
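<p>As a rough illustration of that output, the sketch below builds a small gesture-like record and serializes it to JSON with Python's standard library. The field names are hypothetical placeholders, not the suite's actual schema; the repository's sample data shows the real layout.</p>
<pre><code class="language-python">import json

# Minimal sketch of a keyframed gesture record; the field names are
# hypothetical -- consult the suite's sample data for the real schema.
gesture = {
    "name": "wave",
    "keyframes": [
        {"time": 0.0,  "right_elbow": "place high", "right_wrist": "right high"},
        {"time": 0.75, "right_elbow": "place high", "right_wrist": "left high"},
    ],
}

with open("wave.json", "w") as f:
    json.dump(gesture, f, indent=2)
</code></pre>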
<h4 id="kinectreader-human-gesture-capture-tool"><strong><a href="https://github.com/microsoft/LabanotationSuite/tree/master/GestureAuthoringTools/KinectReader">KinectReader: </a> Human Gesture Capture Tool</strong></h4>
<p>KinectReader is a compiled Windows application that connects to a Kinect sensor device and provides a user interface for capturing and storing gestures performed by human subjects. Its primary output is human stick-figure joint positions in CSV format, but it can also capture corresponding RGB video and audio at the same time.</p>
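<p>A minimal sketch of consuming such a capture with Python's <code>csv</code> module, assuming a hypothetical column layout of one timestamp followed by x/y/z triples per joint (the actual columns are documented in the KinectReader repository):</p>
<pre><code class="language-python">import csv

# Read a KinectReader-style joint capture; the column layout is an
# assumption: time, then x/y/z per joint.
with open("capture.csv") as f:
    for row in csv.reader(f):
        t = float(row[0])
        first_joint_xyz = [float(v) for v in row[1:4]]  # first joint's x, y, z
        print(t, first_joint_xyz)
</code></pre>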
<h4 id="kinectcaptureeditor-human-gesture-trimming-tool"><strong><a href="https://github.com/microsoft/LabanotationSuite/tree/master/GestureAuthoringTools/KinectCaptureEditor">KinectCaptureEditor: </a> Human Gesture Trimming Tool</strong></h4>
<p>KinectCaptureEditor is a compiled Windows application that loads human joint-position CSV files produced by KinectReader or other tools, as well as optional corresponding video and audio files. It provides a timeline-based method for trimming audio, video, and joint-movement sequences into representative human gestures.</p>
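<p>Conceptually, trimming reduces to keeping only the frames whose timestamps fall inside a selected window. A sketch, again assuming a timestamp in the first CSV column:</p>
<pre><code class="language-python">import csv

# Keep only frames between start and end (seconds) -- a stand-in for
# the editor's timeline-based trimming.
start, end = 2.0, 5.5
with open("capture.csv") as src, open("trimmed.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        t = float(row[0])
        if t >= start and end >= t:
            writer.writerow(row)
</code></pre>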
<h4 id="labaneditor-gesture-analysis-and-labanotation-generator"><strong><a href="https://github.com/microsoft/LabanotationSuite/tree/master/GestureAuthoringTools/LabanEditor">LabanEditor: </a> Gesture Analysis and Labanotation Generator</strong></h4>
<p>LabanEditor is a Python application that loads a Kinect joint CSV file representing a human gesture, provides algorithmic options for automatically extracting keyframes that correspond to Labanotation data, and offers a graphical user interface for selecting and modifying the extracted keyframes. Additionally, it saves the resulting gesture data in a JSON file format suitable for controlling robots running a gesture-interpretation driver, as well as PNG renderings of the charts and diagrams used in the interface.</p>
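<p>One simple heuristic for keyframe extraction, offered here only as a stand-in for LabanEditor's actual algorithms, is to keep frames where overall joint motion dips below a speed threshold:</p>
<pre><code class="language-python">import numpy as np

def keyframes(positions, threshold=0.05):
    """Pick frames where joint speed drops below a threshold -- a
    simplified stand-in for LabanEditor's extraction options."""
    # positions: (frames, joints * 3) array of joint coordinates
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    keep = [0]  # always keep the first frame
    for i in range(1, len(speed)):
        if speed[i - 1] > threshold and threshold >= speed[i]:
            keep.append(i)
    keep.append(len(positions) - 1)  # and the last frame
    return keep
</code></pre>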
<h4 id="msrabotsimulation-gesture-performance-with-simulated-robot"><strong><a href="https://github.com/microsoft/LabanotationSuite/tree/master/MSRAbotSimulation">MSRAbotSimulation: </a> Gesture Performance with Simulated Robot</strong></h4>
<p>This Python- and browser-based simulation software uses JavaScript and HTML to implement an animated 3D model of the robot and a user interface for selecting and rendering gestures described in the JSON format. A temporary local HTTP server invoked with Python, or an existing server, can be used to host the software, and the simulation runs within a modern web browser. The user can choose from a collection of sample gestures, or select a new gesture captured and created using this project's Gesture Authoring Tools.</p>
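<p>For example, a throwaway local server can be started from the simulation directory with Python's built-in <code>http.server</code> module (the port number is arbitrary):</p>
<pre><code class="language-python"># Serve the current directory at http://localhost:8000
# (equivalent to running: python -m http.server 8000)
import http.server
import socketserver

with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()
</code></pre>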
<h3 id="gesturebot-design-kit"><a href="https://github.com/microsoft/gestureBotDesignKit">gestureBot Design Kit</a></h3>
<p>With a Windows 10 PC, and optionally a 3D printer and about $350 (USD) in electronic servos and parts, the gestureBot Design Kit repository contains all the information needed to build both a virtual and a physical desktop companion robot. It includes browser-based simulation and control software based on the <a href="https://www.robotis.us/dynamixel-xl-320/">Robotis XL</a> series of servo motors. To construct a physical robot, it provides models for 3D-printable body parts, a parts list for electronic components, and step-by-step assembly instructions. No soldering is required, but some manual skill is needed to mate small electronic connectors and to manipulate the small plastic rivets and miniature metal screws that hold the body components together.</p>
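<p>For a sense of the control layer, the sketch below moves a single XL-320 servo using the ROBOTIS <code>dynamixel_sdk</code> Python package. The serial port, baud rate, and servo ID are assumptions for illustration; the kit's own control software wraps calls like these.</p>
<pre><code class="language-python">from dynamixel_sdk import PortHandler, PacketHandler

# XL-320 control-table addresses (Protocol 2.0)
ADDR_TORQUE_ENABLE = 24
ADDR_GOAL_POSITION = 30

port = PortHandler("/dev/ttyUSB0")   # assumed serial port
packet = PacketHandler(2.0)
port.openPort()
port.setBaudRate(1000000)            # assumed baud rate

servo_id = 1                         # assumed servo ID
packet.write1ByteTxRx(port, servo_id, ADDR_TORQUE_ENABLE, 1)
packet.write2ByteTxRx(port, servo_id, ADDR_GOAL_POSITION, 512)  # mid-range position
port.closePort()
</code></pre>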
<h4 id="gesture-library-example-set-of-upper-torso-gestures"><strong><a href="https://github.com/microsoft/gestureBotDesignKit/tree/main/src/Labanotation">Gesture Library: </a> Example Set of Upper-Torso Gestures</strong></h4>
<p>The Gesture Library is a dataset of upper-torso gesture-concept pairs expressed in Labanotation format and stored as JSON files. The data is directly accessed by the Gesture Service and is organized around 40 clusters of gesture-concept pairs, including 6 deictic concepts (me, you, this, that, here, there), 33 expressive theme concepts (hello, many, question, etc.), and 1 "beat" concept used for idling. The clusters were segregated using a method described in the paper <a href="https://hal.archives-ouvertes.fr/hal-03108169"><strong><em>Development and Verification of a Gesture-generating Architecture for Conversational Humanoid Robots</em></strong></a>. The library includes a complete listing of the sample data, including a video clip of each gesture performed by the gestureBot.</p>
<h4 id="gesture-service-example-gesture-service-engine"><strong><a href="https://github.com/microsoft/gestureBotDesignKit/tree/main/src/Samples/gestureService_w2v">Gesture Service: </a> Example Gesture Service Engine</strong></h4>
<p>The Gesture Service project is a software module built with Python and Google's <a href="https://code.google.com/archive/p/word2vec/#!">word2vec</a> word-embedding model that takes a text phrase as input and returns a corresponding gesture.</p>
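<p>A minimal sketch of the idea using the <code>gensim</code> library: embed the words of the input phrase, then return the library concept with the highest cosine similarity. The concept list and model path are placeholders, and the real service adds the clustering described above:</p>
<pre><code class="language-python">from gensim.models import KeyedVectors

# Pretrained word2vec embeddings (path is an assumption for illustration)
kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

concepts = ["hello", "question", "many", "you", "here"]  # sample subset

def pick_gesture(phrase):
    words = [w for w in phrase.lower().split() if w in kv]
    if not words:
        return "beat"  # idle gesture when nothing matches
    return max(concepts, key=lambda c: kv.n_similarity(words, [c]))

print(pick_gesture("how many people are coming"))
</code></pre>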
<h2 id="navigation">Navigation</h2>
<p>The field of robot navigation includes systems and methods such as simultaneous localization and mapping (SLAM), path planning, and map management.</p>
<h3 id="hololensnavigation"><strong><a href="https://github.com/microsoft/HololensNavigation">HoloLensNavigation</a></strong></h3>
<p>The HoloLensNavigation system shows how a <a href="https://www.microsoft.com/en-us/hololens">HoloLens</a> device can be placed on the head of a <a href="https://us.softbankrobotics.com/pepper">Pepper robot</a> to provide a self-calibrating indoor navigation solution within a single room. It operates in one of three modes: map generation, position calibration, and navigation.</p>
<h4 id="hololensspatialmapping-dynamic-spatial-mapping"><strong><a href="https://github.com/microsoft/HololensNavigation">HoloLensSpatialMapping: </a> Dynamic Spatial Mapping</strong></h4>
<p>HoloLensSpatialMapping is a UWP application that uses the HoloLens device sensors to capture and maintain a spatial map of the immediate environment; it also communicates with the HoloROSBridge package.</p>
<h4 id="hololens_localization-local-position-calibration-and-computation"><strong><a href="https://github.com/microsoft/HololensNavigation/tree/main/linux/HoloLens_Localization">HoloLens_Localization: </a> Local Position Calibration and Computation</strong></h4>
<p>HoloLens_Localization is a ROS (Melodic) package that computes the local position of the robot based on sensor measurements as the robot moves through calibrated poses and navigates through the environment.</p>
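<p>As a sketch of how another node might consume the localization output, the snippet below subscribes to a pose topic with <code>rospy</code>. The topic name is a placeholder; the package's launch files define the names actually used.</p>
<pre><code class="language-python">#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    p = msg.pose.position
    rospy.loginfo("robot at x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

rospy.init_node("pose_listener")
rospy.Subscriber("/hololens/pose", PoseStamped, on_pose)  # placeholder topic
rospy.spin()
</code></pre>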
<h4 id="holorosbridge-ros-communication-with-hololens-device"><strong><a href="https://github.com/microsoft/HololensNavigation/tree/main/linux/HoloROSBridge">HoloROSBridge: </a>ROS Communication with HoloLens Device</strong></h4>
<p>HoloROSBridge is a ROS (Melodic) package that communicates with the HoloLensSpatialMapping application running on the HoloLens device.</p>
<h4 id="holo_nav_dash-operational-dashboard"><strong><a href="https://github.com/microsoft/HololensNavigation/tree/main/linux/holo_nav_dash">holo_nav_dash: </a> Operational Dashboard</strong></h4>
<p>holo_nav_dash is a ROS (Melodic) package that provides a local HTTP server and a browser-based operational interface for starting up and monitoring calibration and navigation operations.</p>
<h4 id="navigation_launcher-ros-navigation-stack-launcher"><strong><a href="https://github.com/microsoft/HololensNavigation/tree/main/linux/navigation_launcher">navigation_launcher: </a> ROS Navigation Stack Launcher</strong></h4>
<p>navigation_launcher is a ROS (Melodic) package that contains launch scripts for starting up components for the HoloLens stack, the HoloLens Navigation stack, and the ROS Navigation stack.</p>
<h2 id="manipulation">Manipulation</h2>
<p>Robotic object manipulation is common in industrial applications, where actions are manually programmed and repeated behind safety barriers. In service-robotics scenarios, dynamic environments and safety considerations make the field much more challenging. Our projects explore solutions in which HRI and navigation technologies can be leveraged to allow robots to learn from humans to perform manipulation tasks safely and effectively in residential, workplace, and public environments.</p>
<p>While we are working toward an open-source object-manipulation sample for service robotics next year, our teammates at Microsoft Bonsai are forging ahead with Autonomous Systems for industrial applications. Take a look at <a href="https://microsoft.github.io/moab/">Project Moab</a>.</p></div>
@@ -176,5 +186,5 @@
<!--
MkDocs version : 1.0.4
Build Date UTC : 2021-05-26 19:34:49
-->
Some file diffs are hidden because one or more lines are too long.
Binary data
docs/sitemap.xml.gz
Binary file not shown.