# Platform for Situated Intelligence
Platform for Situated Intelligence (or, in short, \psi, pronounced like the Greek letter) is an open, extensible framework for the development and research of multimodal, integrative-AI systems. Examples include multimodal interactive systems such as social robots and embodied conversational agents, mixed-reality systems, and applications for ambient intelligence or smart spaces. In essence, any application that processes streaming sensor data (such as audio, video, or depth), combines multiple (AI) technologies, and operates under latency constraints can benefit from the affordances provided by the framework.
The framework provides:
- a modern, performant infrastructure for working with multimodal, temporally streaming data
- a set of tools for multimodal data visualization, annotation, and processing
- an ecosystem of components for various sensors, processing technologies, and effectors
A high-level overview of the framework is available in this blog post. A webinar containing a brief introduction and tutorial on how to code with \psi is available as an online video. An in-depth description of the framework is available in this technical report.
## What's New
03/14/2024: Alongside the new beta version 0.19, we are excited to announce the release of a new application called Situated Interactive Guidance Monitoring and Assistance (SIGMA). Built on \psi, SIGMA is a baseline prototype and testbed system intended to accelerate research on mixed-reality task-assistive agents. It is available under a research-only license, and researchers can experiment with and build upon this prototype to investigate the many challenges of developing real-time, interactive mixed-reality agents. Check it out!
12/08/2022: This week we have released beta version 0.18, continuing to refine support for building mixed reality applications with \psi, and further evolving debugging and visualization capabilities in PsiStudio.
04/21/2022: We've recently released beta version 0.17, which includes important updates to mixed reality support in \psi, including a set of tools for streaming data from a HoloLens 2 to a separate PC for data collection and export. The release also includes several updates to visualization and PsiStudio, the addition of a wrapper for running MaskRCNN models, updates to Azure Kinect Components, as well as some runtime updates and various other bug fixes.
07/29/2021: Check out this new sample application which shows how you can integrate \psi with the Teams bot architecture to develop bots that can participate in live meetings! (Please note that although it is hosted in the Microsoft Graph repository, you should post any issues or questions about this sample here).
05/02/2021: We've opened the Discussions tab on the repo and plan to use it as a place to connect with other members of our community. Please use these forums to ask questions, share ideas and feature requests, show off the cool components or projects you're building with \psi, and generally engage with other community members.
04/29/2021: Thanks to all who joined us for the Platform for Situated Intelligence Workshop! In this workshop, we discussed the basics on how to use the framework to accelerate your own work in the space of multimodal, integrative AI; presented some in-depth tutorials, demos, and previews of new features; and had a fun panel on how to build and nurture the open source community. All sessions were recorded, and you can find the videos on the event website now.
## Getting Started
The core \psi infrastructure is built on .NET Standard and therefore runs on both Windows and Linux. Some components and tools are platform-specific and available only on one operating system or the other. You can build \psi applications either by leveraging the \psi NuGet packages or by cloning and building the source code.
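For example, the NuGet route typically amounts to creating a project and adding the core runtime package. A minimal sketch is below; the \psi packages are published as prerelease versions, hence the `--prerelease` flag:

```shell
# Create a new console project and reference the core \psi runtime from NuGet.
# Microsoft.Psi.Runtime is the core package; add others (e.g., Microsoft.Psi.Audio)
# as needed for the components you use.
dotnet new console -o MyFirstPsiApp
cd MyFirstPsiApp
dotnet add package Microsoft.Psi.Runtime --prerelease
```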
**A Brief Introduction.** To learn more about \psi and how to build applications with it, we recommend you start with the Brief Introduction tutorial, which walks you through some of the main concepts. It shows how to create a simple program, describes the core concept of a stream, and explains how to transform, synchronize, visualize, and persist streams, and how to replay them from disk.
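As a quick flavor of what this looks like in code, below is a minimal sketch in the spirit of that tutorial (illustrative, not a verbatim excerpt): it creates a pipeline, generates a timed stream of values, transforms it, and prints each message with its originating time.

```csharp
using System;
using Microsoft.Psi;

class Program
{
    static void Main()
    {
        // The pipeline is the container in which all \psi components run.
        using (var p = Pipeline.Create())
        {
            // Generate a stream of 100 doubles (0, 0.1, 0.2, ...), one every 100 ms.
            var sequence = Generators.Sequence(p, 0d, x => x + 0.1, 100, TimeSpan.FromMilliseconds(100));

            // Transform the stream with Select, then subscribe with Do; the
            // envelope carries each message's originating time.
            sequence
                .Select(x => x * x)
                .Do((x, e) => Console.WriteLine($"{x:0.00} at {e.OriginatingTime.TimeOfDay}"));

            // Run the pipeline to completion.
            p.Run();
        }
    }
}
```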
**A Video Webinar.** If you prefer getting started by watching a presentation about the framework, this video webinar gives a 30-minute high-level overview of the framework, followed by a 30-minute hands-on coding session illustrating how to write a first simple application. Alternatively, for a shorter (~13 min) high-level overview, see the presentation we did as part of the Tech Minutes series.
**Samples.** If you would like to start directly from sample code, a number of small sample applications are also available, and several of them have walkthroughs that explain how each sample was constructed and point to additional documentation. We recommend starting with the samples below, listed in increasing order of complexity:
| Name | Description | Cross-platform | Requirements |
| --- | --- | --- | --- |
| HelloWorld | This sample provides the simplest starting point for creating a \psi application: it illustrates how to create and run a simple \psi pipeline containing a single stream. | Yes | None |
| SimpleVoiceActivityDetector | This sample captures audio from a microphone and performs voice activity detection, i.e., it computes a boolean signal indicating whether or not the audio contains voiced speech. | Yes | Microphone |
| WebcamWithAudio for Windows or Linux | This sample shows how to display images from a camera and the audio energy level from a microphone and illustrates the basics of stream synchronization (see the sketch below the table). | Yes | Webcam and Microphone |
| WhatIsThat | This sample implements a simple application that uses an Azure Kinect sensor to detect the objects a person is pointing to. | Windows-only | Azure Kinect + Cognitive Services |
| HoloLensSample | This sample demonstrates how to develop Mixed Reality \psi applications for HoloLens 2. | UWP | HoloLens 2 |
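To give a flavor of the stream-synchronization pattern that the WebcamWithAudio sample is built around, the sketch below joins two streams by originating time. The streams here are synthetic stand-ins (generated sequences rather than real camera and microphone components); see the sample walkthroughs for the actual sensor wiring.

```csharp
using System;
using Microsoft.Psi;

class JoinSketch
{
    static void Main()
    {
        using (var p = Pipeline.Create())
        {
            // Synthetic stand-ins for sensor streams: "frames" at ~30 Hz and
            // audio "levels" at 100 Hz (in the real sample these come from
            // camera and microphone components).
            var frames = Generators.Sequence(p, 0, f => f + 1, 100, TimeSpan.FromMilliseconds(33));
            var levels = Generators.Sequence(p, 0.0, a => a + 0.01, 300, TimeSpan.FromMilliseconds(10));

            // Join synchronizes streams on originating time: each frame is
            // paired with the nearest level within a +/- 20 ms tolerance.
            frames
                .Join(levels, TimeSpan.FromMilliseconds(20))
                .Do(pair => Console.WriteLine($"frame {pair.Item1} with level {pair.Item2:0.00}"));

            p.Run();
        }
    }
}
```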
**Documentation.** The documentation for \psi is available in the GitHub project wiki. It contains many additional resources, including tutorials, other specialized topics, and a full API reference that can help you learn more about the framework.
## Getting Help
If you find a bug or would like to request a new feature or additional documentation, please file an issue on GitHub. Use the `bug` label when filing issues that represent code defects, and provide enough information to reproduce the bug. Use the `feature request` label to request new features, and the `documentation` label to request additional documentation.
Please also make use of the Discussions for asking general questions, sharing ideas about new features or applications you might be interested in, showing off the wonderful things you're building with \psi, and engaging with other community members.
## Contributing
We are looking forward to engaging with the community to improve and evolve Platform for Situated Intelligence! We welcome contributions in many forms: from simply using it and filing issues and bugs, to writing and releasing your own new components, to creating pull requests for bug fixes or new features. The Contributing Guidelines page in the wiki describes many ways in which you can get involved, and some useful things to know before contributing to the code base.
To find more information about our future plans, please see the Roadmap document.
## Who is Using \psi?
Platform for Situated Intelligence has been and is currently used in several industry and academic research labs, including (but not limited to):
- the Situated Interaction project, as well as other research projects at Microsoft Research.
- the PACCE team at IMT Atlantique / L2SN.
- the Interactive Robotics Group at MIT.
- the MultiComp Lab at Carnegie Mellon University.
- the Speech Language and Interactive Machines research group at Boise State University.
- the Qualitative Reasoning Group at Northwestern University.
- the Intelligent Human Perception Lab at USC Institute for Creative Technologies.
- the Teledia research group at Carnegie Mellon University.
- the F&M Computational, Affective, Robotic, and Ethical Sciences (F&M CARES) lab at Franklin and Marshall College.
- the Transportation, Bots, & Disability Lab at Carnegie Mellon University.
If you would like to be added to this list, just file a GitHub issue and label it with the `whoisusing` label. Add a URL for your research lab, website, or project that you would like us to link to.
## Technical Report
A more in-depth description of the framework is available in this technical report. Please cite as:
    @misc{bohus2021platform,
          title={Platform for Situated Intelligence},
          author={Dan Bohus and Sean Andrist and Ashley Feniello and Nick Saw and Mihai Jalobeanu and Patrick Sweeney and Anne Loomis Thompson and Eric Horvitz},
          year={2021},
          eprint={2103.15975},
          archivePrefix={arXiv},
          primaryClass={cs.AI}
    }
## Disclaimer
The codebase is currently in beta and various aspects of the framework are under active development. There are probably still bugs in the code and we may make breaking API changes.
While the Platform for Situated Intelligence source code and the `Microsoft.Psi.*` NuGet packages are available under an MIT license, our code and NuGet packages have dependencies on other NuGet packages. If you build an application using Platform for Situated Intelligence, please check the licensing requirements for all referenced NuGet packages in your solution.
## Licenses
Platform for Situated Intelligence is available under an MIT License, with the exception of all files under the Applications folder (including the SIGMA application), which are released under the Microsoft Research license agreement. See also Third Party Notices.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
## Acknowledgments
We would like to thank our internal collaborators and external early adopters, including (but not limited to): Daniel McDuff, Kael Rowan, Lev Nachmanson, and Mike Barnett at MSR, Chirag Raman and Louis-Philippe Morency in the MultiComp Lab at CMU, as well as researchers in the SLIM research group at Boise State and the Qualitative Reasoning Group at Northwestern University.