Moved files, removed stale architecture doc, and fixed links. (#252)

wes-b 2019-04-19 15:07:09 -07:00 committed by GitHub
Parent 529d0cf58a
Commit 11085c3f4f
No key found corresponding to this signature
GPG key ID: 4AEE18F83AFDEB23
7 changed files: 7 additions and 63 deletions

@@ -79,8 +79,6 @@ add_subdirectory(tools)
if (K4A_BUILD_DOCS)
find_package(Doxygen 1.8.14 EXACT)
if (DOXYGEN_FOUND)
set(DOXYGEN_MAINPAGE ${CMAKE_CURRENT_SOURCE_DIR}/docs/sdk.md )
set(DOXYGEN_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/include/k4a ${CMAKE_CURRENT_SOURCE_DIR}/include/k4arecord ${DOXYGEN_MAINPAGE})
# These variables are used in Doxyfile.in

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,6 @@
# Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or
comments.


@@ -53,7 +53,7 @@ For more instructions on running and writing tests see
[testing](docs/testing.md).
# Contribute
We welcome your contributions! Please see the [contribution guidelines](docs/contributing.md).
We welcome your contributions! Please see the [contribution guidelines](CONTRIBUTING.md).
## Feedback
For any feedback or to report a bug, please file a [GitHub Issue](https://github.com/Microsoft/Azure-Kinect-Sensor-SDK/issues).

Binary data
docs/Architecture.png

Binary file not shown.

Before

Width:  |  Height:  |  Size: 65 KiB

Binary data
docs/SDK Internal Architecture.vsdx

Binary file not shown.


@@ -1,60 +0,0 @@
# Kinect for Azure SDK
# Architecture
![Architecture diagram](Architecture.png)
Project Kinect for Azure is a sensor that contains a depth camera, a color camera, an IMU, and an audio mic array. The SDK
provides interfaces to the depth, color, and IMU sensors. Audio and parts of the color sensor will be routed through
system primitive interfaces so that they may be used without the need for the SDK to be running.
The SDK has a modular design that allows individual components to be tested in isolation before being
integrated into the system.
# Construction of the K4A SDK
The user will open and initialize the k4a SDK by calling k4a_device_open. The library will create:
1. depth_mcu
1. color_mcu
These two modules will create USB command (CMD) modules that are responsible for interfacing with libusb and
communicating with the hardware.
Next, the library will create the calibration module, where calibration data will be stored and extrinsic data can be
converted to offsets between sensors (as one example).
Once the calibration module has been created, the k4a library will create the following modules:
1. depth
1. color
1. imu
When these modules are created, they will receive handles to calibration, depth_mcu, and color_mcu so that they can
communicate with the modules they need.
Finally, the K4A library will create the SDK API module and pass it the calibration, color, depth, and imu handles to support
the public interfaces.
The 'correlated captures' module is created on demand when correlated data is needed.
# Data flow and threads
The SDK is designed such that the caller owns most of the threads responsible for accessing the sensor. API calls to
configure, start, stop, and fetch capture data are all designed for the user to call into the SDK and block if necessary.
The 'usb_cmd' modules are an exception to this, as they require a dedicated thread to keep libusb filled with buffers so
that we can stream depth and IMU data to the SDK. (The color sensor will also likely have a thread. That part of the
design is still TBD.) The 'usb_cmd' thread will call a callback function to depth or imu. In the case of the depth
callback function for streamed data, the data will briefly be sent to the 'r2d' (raw to depth) module so the raw
depth capture can be converted to a depth point map. Once 'r2d' is done with the conversion, the sample will be given to
the queue. If the user has called an API like k4a_device_get_capture on an empty queue, the user's thread will be
unblocked and allowed to return the newly provided capture.
If the user has configured the SDK for correlated captures, then the depth and color callback functions will also provide
the respective samples to the 'correlated captures' module.