Add Green Screen example (#619)
* Added beginnings of greenscreen example
* More progress, try/catch
* runs and sometimes gets all captures
* WIP, but closer.
* Single-camera greenscreen works! But it's slow
* it's getting closer to working now and that's cool
* further progress, not compiling
* Calibration runs, unclear if correct
* still WIP but coming along
* still not working- calibration may not even be necessary
* closer to working, sane results but not similar when moving calibration board
* Still not perfect but consistently good
* homogeneous matrix constructed reasonably
* Greening partially working
* CLeanup and fix averaging incorrectness
* Added fix for reversed calibration points bug
* Mostly comments
* Working on a refactor and adding lots of explanation
* Started refactor, fixed images desynchronizing
* Better debugging and not dropping as many frames
* debug commit
* fixed slowness, starting to come together
* More refactoring, mostly towards fewer vectors
* yet more refactoring
* Comment changes and made some long code a function
* Removed unnecessary try-catch
* More comments, cleanup, and less output params
* Less output parameters
* deleted extraneous function
* Refactor a function to not have output args
* another output param refactor
* reformat comments
* Added command-line options and fixed chessboard size issues
* Explain some constants
* Account for subordinate color-to-depth delay
* threshold now a command line arg
* Use Matx and factored transformation code mostly out
* Only use one configuration object for each
* Moved synchronization logic to MultiDeviceCapturer
* minor comment clarifications
* Cleaned up displaying a bit
* Improved greenscreening logic
* Calibration now uses many images
* Minor cleanup
* Fixed unclear indentation of comments
* It's 'green screen', not 'greenscreen'
* clangformat wants this, though it's not very clear
* Set background completely green
* Fixed incorrect usage of calibration
* Added WIP README that provides info + background
* Changed master-sub to main-backup in main to clarify purpose of devices
* Hopefully sets up OpenCV as a dependency properly
* Fixed typo in opencv install
* Pre-search was failing, so don't pre-search
* Construct new calibration object, not modify old
* Consistency in camera name ordering
* Fix out-of-date comment
* More explanatory comment on transformations
* Added copyright headers
* Backup to secondary, hopefully more clear
* Added more and better error messages
* Refactor to not have vector of devices exposed
* Remove GetOpt dependency, debug viz
* Fixed issues with the merge
* Attempt to get OpenCV conditionally required
* Fix up some CMake issues and installation issues
* Missed update to yaml
* Added OpenCV library folder
* Try to include opencv2 includes
* Should not use that include dir like that
* Install opencv from binaries on Windows attempt
* Install opencv, add to path, tell cmake
* Set env vars with logging commands
* Don't bother with OpenCV for x86 builds
* removed files that snuck in
* Don't get OpenCV unless it's a 64-bit OS
* Don't try OpenCV if we're building for x86
* no space
* fail when decoding fails
* Compile opencv example properly
* Modify build because we need to know target arch
* removed endif which causes compilation failure
* Fixed up warning
* Attempt to be able to use one or two cameras
* Should properly check arch now for installing OpenCV
* Needed quotes
* Attempt to clean up azure pipelines logic
* I don't actually know Azure Pipelines
* Attempted cmake config fix
* use variables better?
* Don't think I should use expressions there
* Another attempt to fix Azure Pipelines issues
* Fixed weird opencv error by using BGRA not MJPG
* One-device and two-device greenscreen modes work
* Fixed OpenCV example test
* Clarified comment
* Temporarily disable Linux testing until test infra updated
* attempt to fix occasional OpenCV download failures
* Need PS6 for retry options
* Don't install OpenCV unless specified. Fixes Linux build, hopefully
* Add options for calibration timeouts and greenscreen duration
* Now no longer assume OpenCV is required on Linux
* Added single and double camera greenscreen example tests
* Adjusted numbers in script for different calibration board in testing
* See if issue is PATH or similar
* Added better logging for error for functional unit tests
* install opencv on test machine
* Hopefully fix some of the issues with functional tests
* Factor out install of opencv into a script
* OpenCV-based tests should now not run unless OpenCV is installed
* Don't install OpenCV on test machine but make sure it's there
* typo
* Try again to install OpenCV on test
* Remove install opencv now that it's installed
* Fixed a lingering "checkerboard" to "chessboard"
* Use an Ubuntu 18.04 image and try to use the Ubuntu functional tests
* Properly add the OpenCV tests if it's available
* Don't bootstrap ubuntu because we have a docker image now
* Don't install OpenCV, it's already in the Docker container
* Remove confusing offset for devices in MultiDeviceCapturer
* Build with Clang
* No braces needed
* A few more fixes from not using a vector for configs
* More updates to README
* Workaround for debug version of OpenCV on Windows
* Explanations for the OpenCV LIBS issue and for green screen logic
* Re-expanded correct sync window until firmware is updated
* Greenscreen with background, not green
* More modern CMake
* No need to find package here
* Added findopencv so no need to specify -DOpenCV_DIR
Parent: 65834b9b3a
Commit: ed6f22bf7d
@@ -129,6 +129,33 @@ if ("${CMAKE_SYSTEM_NAME}" STREQUAL "Linux")
    set(CMAKE_INSTALL_RPATH "\$ORIGIN")
endif()

set(TARGET_ARCH ${CMAKE_SYSTEM_PROCESSOR})

# CMake doesn't set the target processor correctly for MSVC
if ("${CMAKE_C_COMPILER_ID}" STREQUAL "MSVC")
    if ("${CMAKE_CONFIGURATION_TYPES}" STREQUAL "")
        set(K4A_BINARY_DIR_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
        set(K4A_BINARY_DIR_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
    else()
        set(K4A_BINARY_DIR_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/Debug)
        set(K4A_BINARY_DIR_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/RelWithDebInfo)
    endif()

    # Check what architecture we are building for. This assumes all 64-bit architectures are amd64, which will break
    # if we decide to support arm.
    if ("${CMAKE_SIZEOF_VOID_P}" STREQUAL "8")
        set(TARGET_ARCH "amd64")
        configure_file(k4a.props.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/k4a.x64.props)
        configure_file(StubGenerator.xml.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/StubGenerator.x64.xml)
    elseif("${CMAKE_SIZEOF_VOID_P}" STREQUAL "4")
        set(TARGET_ARCH "x86")
        configure_file(k4a.props.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/k4a.x86.props)
        configure_file(StubGenerator.xml.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/StubGenerator.x86.xml)
    else()
        message(FATAL_ERROR "Unknown architecture with size of void* = ${CMAKE_SIZEOF_VOID_P}")
    endif()
endif()

add_subdirectory(examples)
add_subdirectory(src)
add_subdirectory(tests)

@@ -246,31 +273,6 @@ endif()
# Generate .NET Version file.
configure_file(VersionInfo.cs.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/VersionInfo.cs)

set(TARGET_ARCH ${CMAKE_SYSTEM_PROCESSOR})

# CMake doesn't set the target processor correctly for MSVC
if ("${CMAKE_C_COMPILER_ID}" STREQUAL "MSVC")
    if ("${CMAKE_CONFIGURATION_TYPES}" STREQUAL "")
        set(K4A_BINARY_DIR_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
        set(K4A_BINARY_DIR_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
    else()
        set(K4A_BINARY_DIR_DEBUG ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/Debug)
        set(K4A_BINARY_DIR_RELEASE ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/RelWithDebInfo)
    endif()

    if ("${CMAKE_SIZEOF_VOID_P}" STREQUAL "8")
        set(TARGET_ARCH "amd64")
        configure_file(k4a.props.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/k4a.x64.props)
        configure_file(StubGenerator.xml.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/StubGenerator.x64.xml)
    elseif("${CMAKE_SIZEOF_VOID_P}" STREQUAL "4")
        set(TARGET_ARCH "x86")
        configure_file(k4a.props.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/k4a.x86.props)
        configure_file(StubGenerator.xml.in ${CMAKE_CURRENT_SOURCE_DIR}/src/csharp/StubGenerator.x86.xml)
    else()
        message(FATAL_ERROR "Unknown architecture with size of void* = ${CMAKE_SIZEOF_VOID_P}")
    endif()
endif()

# Packaging support
set(CMAKE_INSTALL_DEFAULT_DIRECTORY_PERMISSIONS
    OWNER_READ OWNER_WRITE OWNER_EXECUTE
@@ -12,6 +12,8 @@ variables:
    value: '1.6.102075014'
  - name: 'NuGetPackageVersion'
    value: '1.2.0'
  - name: 'OpenCVPath'
    value: 'C:\OpenCV\Build\x64\vc14\'

trigger:
  batch: false

@@ -36,6 +38,7 @@ jobs:
      MSBuildPlatform: 'x64'
      MSBuildConfiguration: 'Debug'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'TRUE'
    VS2017_x64-msvc_rel_ninja:
      CMakeArch: 'amd64'
      BuildGenerator: 'Ninja'

@@ -43,6 +46,7 @@ jobs:
      MSBuildPlatform: 'x64'
      MSBuildConfiguration: 'Release'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'TRUE'
    VS2017_x64-msvc_debug_msbuild:
      CMakeArch: 'amd64'
      BuildGenerator: 'Visual Studio 15 2017 Win64'

@@ -50,6 +54,7 @@ jobs:
      MSBuildPlatform: 'x64'
      MSBuildConfiguration: 'Debug'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'TRUE'
    VS2017_x64-msvc_rel_msbuild:
      CMakeArch: 'amd64'
      BuildGenerator: 'Visual Studio 15 2017 Win64'

@@ -57,6 +62,7 @@ jobs:
      MSBuildPlatform: 'x64'
      MSBuildConfiguration: 'Release'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'TRUE'
    VS2017_x86-msvc_debug_ninja:
      CMakeArch: 'x86'
      BuildGenerator: 'Ninja'

@@ -64,6 +70,7 @@ jobs:
      MSBuildPlatform: 'x86'
      MSBuildConfiguration: 'Debug'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'FALSE'
    VS2017_x86-msvc_rel_ninja:
      CMakeArch: 'x86'
      BuildGenerator: 'Ninja'

@@ -71,6 +78,7 @@ jobs:
      MSBuildPlatform: 'x86'
      MSBuildConfiguration: 'Release'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'FALSE'
    VS2017_x86-msvc_debug_msbuild:
      CMakeArch: 'x86'
      BuildGenerator: 'Visual Studio 15 2017'

@@ -78,6 +86,7 @@ jobs:
      MSBuildPlatform: 'x86'
      MSBuildConfiguration: 'Debug'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'FALSE'
    VS2017_x86-msvc_rel_msbuild:
      CMakeArch: 'x86'
      BuildGenerator: 'Visual Studio 15 2017'

@@ -85,6 +94,7 @@ jobs:
      MSBuildPlatform: 'x86'
      MSBuildConfiguration: 'Release'
      vmImage: 'vs2017-win2016'
      UsesOpenCV: 'FALSE'
    VS2019_x64-msvc_debug_ninja:
      CMakeArch: 'amd64'
      BuildGenerator: 'Ninja'

@@ -92,6 +102,7 @@ jobs:
      MSBuildPlatform: 'x64'
      MSBuildConfiguration: 'Debug'
      vmImage: 'windows-2019'
      UsesOpenCV: 'TRUE'
    VS2019_x64-msvc_rel_ninja:
      CMakeArch: 'amd64'
      BuildGenerator: 'Ninja'

@@ -99,6 +110,7 @@ jobs:
      MSBuildPlatform: 'x64'
      MSBuildConfiguration: 'Release'
      vmImage: 'windows-2019'
      UsesOpenCV: 'TRUE'
    # VS2019 msbuild generators blocked by issue Microsoft/azure-pipelines-image-generation#754
    # VS2019_x86_64-pc-windows-msvc_debug_msbuild:
    #   CMakeArch: 'amd64'

@@ -121,6 +133,7 @@ jobs:
      MSBuildPlatform: 'x86'
      MSBuildConfiguration: 'Debug'
      vmImage: 'windows-2019'
      UsesOpenCV: 'FALSE'
    VS2019_x86-msvc_rel_ninja:
      CMakeArch: 'x86'
      BuildGenerator: 'Ninja'

@@ -128,6 +141,7 @@ jobs:
      MSBuildPlatform: 'x86'
      MSBuildConfiguration: 'Release'
      vmImage: 'windows-2019'
      UsesOpenCV: 'FALSE'
    # VS2019 msbuild generators blocked by issue Microsoft/azure-pipelines-image-generation#754
    # VS2019_x86-pc-windows-msvc_debug_msbuild:
    #   CMakeArch: 'x86'

@@ -190,6 +204,10 @@ jobs:
      modifyEnvironment: true
    condition: and(succeeded(), eq(variables['vmImage'], 'windows-2019'))

  - pwsh: '$(Build.SourcesDirectory)/scripts/install-opencv.ps1'
    displayName: 'Install OpenCV'
    condition: and(succeeded(), eq(variables['UsesOpenCV'], 'TRUE'))

  - powershell: |
      $ErrorActionPreference = "Stop"

@@ -235,7 +253,7 @@ jobs:
  - script: |
      set PATH=%PATH%;$(Build.BinariesDirectory)\nasmexe\nasm-2.14.02
      cmake -G "$(BuildGenerator)" "-DCMAKE_VERBOSE_MAKEFILE=ON" "$(Build.SourcesDirectory)" "-DCMAKE_BUILD_TYPE=$(CMakeConfiguration)" "-DK4A_SOURCE_LINK=ON"
      cmake -G "$(BuildGenerator)" "-DCMAKE_VERBOSE_MAKEFILE=ON" "-DOpenCV_REQUIRED=$(UsesOpenCV)" "$(Build.SourcesDirectory)" "-DCMAKE_BUILD_TYPE=$(CMakeConfiguration)" "-DK4A_SOURCE_LINK=ON"
    workingDirectory: '$(Build.BinariesDirectory)'
    displayName: 'CMake Configure'
    env:

@@ -360,15 +378,19 @@ jobs:
    x64-clang_debug_ninja:
      CMakeLinuxTargetTriple: 'x86_64-linux-clang'
      CMakeConfiguration: 'debug'
      UsesOpenCV: 'TRUE'
    x64-gnu_debug_ninja:
      CMakeLinuxTargetTriple: 'x86_64-linux-gnu'
      CMakeConfiguration: 'debug'
      UsesOpenCV: 'TRUE'
    x64-clang_rel_ninja:
      CMakeLinuxTargetTriple: 'x86_64-linux-clang'
      CMakeConfiguration: 'relwithdebinfo'
      UsesOpenCV: 'TRUE'
    x64-gnu_rel_ninja:
      CMakeLinuxTargetTriple: 'x86_64-linux-gnu'
      CMakeConfiguration: 'relwithdebinfo'
      UsesOpenCV: 'TRUE'

    # 32-bit builds are currently broken
    # i386-unknown-linux-clang_debug_ninja:

@@ -401,7 +423,8 @@ jobs:
      fi
    displayName: 'Check Line Endings'

  - script: 'cmake -GNinja -DCMAKE_VERBOSE_MAKEFILE=ON "$(Build.SourcesDirectory)" "-DCMAKE_BUILD_TYPE=$(CMakeConfiguration)" "-DCMAKE_TOOLCHAIN_FILE=$(Build.SourcesDirectory)/cmake/toolchains/$(CMakeLinuxTargetTriple).cmake"'
  # OpenCV always required on Linux
  - script: 'cmake -GNinja -DCMAKE_VERBOSE_MAKEFILE=ON -DOpenCV_REQUIRED=$(UsesOpenCV) "$(Build.SourcesDirectory)" "-DCMAKE_BUILD_TYPE=$(CMakeConfiguration)" "-DCMAKE_TOOLCHAIN_FILE=$(Build.SourcesDirectory)/cmake/toolchains/$(CMakeLinuxTargetTriple).cmake"'
    workingDirectory: '$(Build.BinariesDirectory)'
    displayName: 'CMake Configure'

@@ -544,7 +567,6 @@ jobs:
    filePath: '$(Build.SourcesDirectory)\doxygen\PublishDocs.ps1'
    condition: and(eq(variables['System.CollectionId'], 'cb55739e-4afe-46a3-970f-1b49d8ee7564'), and(succeeded(), ne(variables['Build.Reason'], 'PullRequest')))

- ${{ if eq(variables['System.CollectionId'], 'cb55739e-4afe-46a3-970f-1b49d8ee7564') }}:
  - job: WindowsFunctionalTests
    displayName: Windows Functional Test

@@ -646,6 +668,9 @@ jobs:
      targetFolder: "$(System.ArtifactsDirectory)/amd64-windows-msvc-RelWithDebInfo/bin/Release/Microsoft.AzureKinect.FunctionalTests/netcoreapp2.1/"
      flattenFolders: true

  - script: 'if exist $(OpenCVPath) ( set PATH=%PATH%;$(OpenCVPath)\bin ) else ( exit 1 )'
    displayName: 'Add OpenCV to PATH'

  - script: '.\amd64-windows-msvc-RelWithDebInfo\bin\AzureKinectFirmwareTool.exe -r'
    workingDirectory: '$(System.ArtifactsDirectory)'
    displayName: 'Reset K4A Device'

@@ -712,7 +737,8 @@ jobs:
  # The account on the NUC does not currently give this user permission to install, so we have done so manually.
  # This captures the dependency NuGet has in this environment. We locally ran 'apt-get install mono-complete'
  #- script: 'apt-get install mono'
  # We also need opencv, so install that too
  #- script: 'apt-get install mono libopencv-dev'
  #  workingDirectory: '$(System.ArtifactsDirectory)'
  #  displayName: "Ensure Mono is installed"
@@ -0,0 +1,11 @@
if (OpenCV_FIND_REQUIRED)
    set (OpenCV_REQ_FLAG "REQUIRED")
else()
    set (OpenCV_REQ_FLAG)
endif()

if ("${CMAKE_SYSTEM_NAME}" STREQUAL "Windows")
    find_package (OpenCV CONFIG PATHS "C:/opencv/build/x64/vc14/lib" ${OpenCV_REQ_FLAG})
else()
    find_package (OpenCV CONFIG ${OpenCV_REQ_FLAG})
endif ()
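For context, a sketch of how a find module like this gets picked up: `find_package(OpenCV)` in module mode searches `CMAKE_MODULE_PATH`, so once the folder holding this file is on that path, callers no longer need to pass `-DOpenCV_DIR`. The exact folder name here is an assumption for illustration:

```cmake
# Hypothetical consumer: assumes the repository keeps its find modules in a cmake/ folder.
# find_package then dispatches to the FindOpenCV.cmake module above, which defers to
# OpenCV's own CONFIG package.
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
find_package(OpenCV REQUIRED)
```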
@@ -6,8 +6,18 @@ add_subdirectory(enumerate)
add_subdirectory(playback_external_sync)
add_subdirectory(k4arecord_custom_track)
add_subdirectory(fastpointcloud)
add_subdirectory(opencv_compatibility)
add_subdirectory(streaming)
add_subdirectory(transformation)
add_subdirectory(undistort)
add_subdirectory(viewer)

if (OpenCV_REQUIRED)
    find_package(OpenCV REQUIRED)
else()
    find_package(OpenCV)
endif ()

if (OpenCV_FOUND)
    add_subdirectory(green_screen)
    add_subdirectory(opencv_compatibility)
endif ()
@@ -0,0 +1,12 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
add_executable(green_screen main.cpp)
target_include_directories(green_screen PRIVATE ${OpenCV_INCLUDE_DIRS})

# OpenCV_LIBS, by default, is picking up the debug version of OpenCV on Windows even in release mode, which was
# causing a dependency on non-redistributable Visual Studio dlls.
if (${CMAKE_SYSTEM_NAME} STREQUAL "Windows")
    target_link_libraries(green_screen PRIVATE k4a::k4a ${OpenCV_DIR}/Opencv_world320.lib)
else()
    target_link_libraries(green_screen PRIVATE k4a::k4a ${OpenCV_LIBS})
endif()
@@ -0,0 +1,221 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once

#include <chrono>
#include <vector>
#include <k4a/k4a.hpp>

// This is the maximum difference between when we expected an image's timestamp to be and when it actually was.
// TODO waiting on a firmware update for this to be returned to 50
constexpr std::chrono::microseconds MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP(33000);
// constexpr std::chrono::microseconds MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP(50);

class MultiDeviceCapturer
{
public:
    // Set up all the devices. Note that the index order isn't necessarily preserved, because we might swap with master
    MultiDeviceCapturer(const vector<uint32_t> &device_indices, int32_t color_exposure_usec, int32_t powerline_freq)
    {
        bool master_found = false;
        if (device_indices.size() == 0)
        {
            throw std::runtime_error("Capturer must be passed at least one camera!");
        }
        for (uint32_t i : device_indices)
        {
            k4a::device next_device = k4a::device::open(i); // construct a device using this index
            // If you want to synchronize cameras, you need to manually set both their exposures
            next_device.set_color_control(K4A_COLOR_CONTROL_EXPOSURE_TIME_ABSOLUTE,
                                          K4A_COLOR_CONTROL_MODE_MANUAL,
                                          color_exposure_usec);
            // This setting compensates for the flicker of lights due to the frequency of AC power in your region. If
            // you are in an area with 50 Hz power, this may need to be updated (check the docs for
            // k4a_color_control_command_t)
            next_device.set_color_control(K4A_COLOR_CONTROL_POWERLINE_FREQUENCY,
                                          K4A_COLOR_CONTROL_MODE_MANUAL,
                                          powerline_freq);
            // We treat the first device found with a sync out cable attached as the master. If it's not supposed to
            // be, unplug the cable from it. Also, if there's only one device, just use it
            if ((next_device.is_sync_out_connected() && !master_found) || device_indices.size() == 1)
            {
                master_device = std::move(next_device);
                master_found = true;
            }
            else if (!next_device.is_sync_in_connected() && !next_device.is_sync_out_connected())
            {
                throw std::runtime_error("Each device must have sync in or sync out connected!");
            }
            else if (!next_device.is_sync_in_connected())
            {
                throw std::runtime_error("Non-master camera found that doesn't have the sync in port connected!");
            }
            else
            {
                subordinate_devices.emplace_back(std::move(next_device));
            }
        }
        if (!master_found)
        {
            throw std::runtime_error("No device with sync out connected found!");
        }
    }

    // configs[0] should be the master, the rest subordinate
    void start_devices(const k4a_device_configuration_t &master_config, const k4a_device_configuration_t &sub_config)
    {
        // Start by starting all of the subordinate devices. They must be started before the master!
        for (k4a::device &d : subordinate_devices)
        {
            d.start_cameras(&sub_config);
        }
        // Lastly, start the master device
        master_device.start_cameras(&master_config);
    }

    // Blocks until we have synchronized captures stored in the output. First is master, rest are subordinates
    std::vector<k4a::capture> get_synchronized_captures(const k4a_device_configuration_t &sub_config,
                                                        bool compare_sub_depth_instead_of_color = false)
    {
        // Dealing with the synchronized cameras is complex. The Azure Kinect DK:
        //      (a) does not guarantee exactly equal timestamps between depth and color or between cameras (delays can
        //          be configured but timestamps will only be approximately the same)
        //      (b) does not guarantee that, if the two most recent images were synchronized, calling get_capture just
        //          once on each camera will still yield synchronized images.
        // There are several reasons for all of this. Internally, devices keep a queue of a few of the captured images
        // and serve those images as requested by get_capture(). However, images can also be dropped at any moment, and
        // one device may have more images ready than another device at a given moment, et cetera.
        //
        // Also, the process of synchronizing is complex. The cameras are not guaranteed to exactly match in all of
        // their timestamps when synchronized (though they should be very close). All delays are relative to the master
        // camera's color camera. To deal with these complexities, we employ a fairly straightforward algorithm. Start
        // by reading in two captures, then if the camera images were not taken at roughly the same time read a new one
        // from the device that had the older capture until the timestamps roughly match.

        // The captures used in the loop are outside of it so that they can persist across loop iterations. This is
        // necessary because each time this loop runs we'll only update the older capture.
        // The captures are stored in a vector where the first element of the vector is the master capture and
        // subsequent elements are subordinate captures.
        std::vector<k4a::capture> captures(subordinate_devices.size() + 1); // add 1 for the master
        size_t current_index = 0;
        master_device.get_capture(&captures[current_index], std::chrono::milliseconds{ K4A_WAIT_INFINITE });
        ++current_index;
        for (k4a::device &d : subordinate_devices)
        {
            d.get_capture(&captures[current_index], std::chrono::milliseconds{ K4A_WAIT_INFINITE });
            ++current_index;
        }

        // If there are no subordinate devices, just return captures, which only has the master image
        if (subordinate_devices.empty())
        {
            return captures;
        }

        bool have_synced_images = false;
        while (!have_synced_images)
        {
            k4a::image master_color_image = captures[0].get_color_image();
            std::chrono::microseconds master_color_image_time = master_color_image.get_device_timestamp();

            for (size_t i = 0; i < subordinate_devices.size(); ++i)
            {
                k4a::image sub_image;
                if (compare_sub_depth_instead_of_color)
                {
                    sub_image = captures[i + 1].get_depth_image(); // offset of 1 because master capture is at front
                }
                else
                {
                    sub_image = captures[i + 1].get_color_image(); // offset of 1 because master capture is at front
                }

                if (master_color_image && sub_image)
                {
                    std::chrono::microseconds sub_image_time = sub_image.get_device_timestamp();
                    // The subordinate's color image timestamp, ideally, is the master's color image timestamp plus the
                    // delay we configured between the master device color camera and subordinate device color camera
                    std::chrono::microseconds expected_sub_image_time =
                        master_color_image_time +
                        std::chrono::microseconds{ sub_config.subordinate_delay_off_master_usec } +
                        std::chrono::microseconds{ sub_config.depth_delay_off_color_usec };
                    std::chrono::microseconds sub_image_time_error = sub_image_time - expected_sub_image_time;
                    // The time error's absolute value must be within the permissible range. So, for example, if
                    // MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP is 2, offsets of -2, -1, 0, 1, and 2 are
                    // permitted
                    if (sub_image_time_error < -MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP)
                    {
                        // Example, where MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP is 1
                        // time                    t=1  t=2  t=3
                        // actual timestamp        x    .    .
                        // expected timestamp      .    .    x
                        // error: 1 - 3 = -2, which is less than the worst-case-allowable offset of -1
                        // the subordinate camera image timestamp was earlier than it is allowed to be. This means the
                        // subordinate is lagging and we need to update the subordinate to get it caught up
                        std::cout << "Subordinate lagging...\n";
                        subordinate_devices[i].get_capture(&captures[i + 1],
                                                           std::chrono::milliseconds{ K4A_WAIT_INFINITE });
                        break;
                    }
                    else if (sub_image_time_error > MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP)
                    {
                        // Example, where MAX_ALLOWABLE_TIME_OFFSET_ERROR_FOR_IMAGE_TIMESTAMP is 1
                        // time                    t=1  t=2  t=3
                        // actual timestamp        .    .    x
                        // expected timestamp      x    .    .
                        // error: 3 - 1 = 2, which is more than the worst-case-allowable offset of 1
                        // the subordinate camera image timestamp was later than it is allowed to be. This means the
                        // subordinate is ahead and we need to update the master to get the master caught up
                        std::cout << "Master lagging...\n";
                        master_device.get_capture(&captures[0], std::chrono::milliseconds{ K4A_WAIT_INFINITE });
                        break;
                    }
                    else
                    {
                        // These captures are sufficiently synchronized. If we've gotten to the end, then all are
                        // synchronized.
                        if (i == subordinate_devices.size() - 1)
                        {
                            have_synced_images = true; // now we'll finish the for loop and then exit the while loop
                        }
                    }
                }
                else if (!master_color_image)
                {
                    std::cout << "Master image was bad!\n";
                    master_device.get_capture(&captures[0], std::chrono::milliseconds{ K4A_WAIT_INFINITE });
                    break;
                }
                else if (!sub_image)
                {
                    std::cout << "Subordinate image was bad!" << endl;
                    subordinate_devices[i].get_capture(&captures[i + 1],
                                                       std::chrono::milliseconds{ K4A_WAIT_INFINITE });
                    break;
                }
            }
        }
        // if we've made it to here, it means that we have synchronized captures.
        return captures;
    }

    const k4a::device &get_master_device() const
    {
        return master_device;
    }

    const k4a::device &get_subordinate_device_by_index(size_t i) const
    {
        // devices[0] is the master. There are only devices.size() - 1 others, so indices greater or equal are invalid
        if (i >= subordinate_devices.size())
        {
            throw std::runtime_error("Subordinate index too large!");
        }
        return subordinate_devices[i];
    }

private:
    // Once the constructor finishes, devices[0] will always be the master
    k4a::device master_device;
    std::vector<k4a::device> subordinate_devices;
};
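As a rough usage sketch (main.cpp, shown further below, does essentially this), a caller hands the capturer the device indices and color controls, starts the devices, and then pulls synchronized capture sets. `main_config` and `secondary_config` here are assumed to come from the caller, e.g. from helpers like main.cpp's get_main_config():

```cpp
// Sketch only: two devices, 8000 us color exposure, powerline mode 2 (60 Hz)
MultiDeviceCapturer capturer({ 0, 1 }, 8000, 2);
capturer.start_devices(main_config, secondary_config);      // subordinates start first internally
std::vector<k4a::capture> captures = capturer.get_synchronized_captures(secondary_config);
// captures[0] is from the master device, captures[1] from the subordinate
```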
@@ -0,0 +1,50 @@
# Green Screen Example

The goal of the green screen example is to demonstrate best practices for using multiple Azure Kinect devices, with an emphasis on synchronization and calibration (the 'green screen' code is only a small portion of the logic). In particular, the green screen application showcases a physical limitation of the hardware and how it can be mostly addressed using another device.

## What does this application *do*?

The green screen example displays the scene as observed by one of the cameras (the 'main' camera). Using the camera's depth, it will paint over anything beyond its depth threshold with green, making it appear as if only the things in the foreground are there. It will fill in missing details with the 'backup' camera, if possible, resulting in a better green screen than the main camera could achieve alone.

## Why use two cameras? Isn't one good enough?

It's true that one camera can get you most of the way to a good solution in this case. However, if you only use one camera (either by using the single-camera mode, or just covering the backup camera) and something in the scene is closer to the camera than something else in the scene (for example, if you hold out an arm), you should see a "shadow" of green pixels on the object in the background near the edge of the obstructing object.

Why? The answer comes back to the physical construction of the Azure Kinect. The color camera and the depth camera are physically offset, so it's possible for the color camera to see something that the depth camera cannot. If the depth camera cannot see a section of the scene that the color camera can, then when the depth image is transformed into the color camera space, the segments of the transformed image that correspond to occluded regions have no depth data, and those 'invalid' pixels are set to 0. However, using another camera's perspective, some of those missing values can be filled in by the secondary depth camera, which can (hopefully) see the parts of the scene that are occluded from the main depth camera.
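In code terms, here is a condensed sketch of the mask logic the example's main.cpp builds (variable names here are illustrative): a pixel is shown if the main camera sees it within the depth threshold, or if the main camera has no data there but the backup camera sees it within the threshold.

```cpp
cv::Mat main_valid = cv_main_depth != 0;     // 0 means "depth unknown" at that pixel
cv::Mat backup_valid = cv_backup_depth != 0;
cv::Mat show_pixel = (main_valid & (cv_main_depth < depth_threshold)) |
                     (~main_valid & backup_valid & (cv_backup_depth < depth_threshold));
```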
TODO add images

## Installation instructions

This example requires OpenCV to build. To make sure it gets built, ensure that OpenCV is found by adding `-DOpenCV_REQUIRED=TRUE` to the `cmake` command.
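For example, a configure step might look like this (the generator and build directory are up to you):

```
cmake .. -GNinja -DOpenCV_REQUIRED=TRUE
```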
### OpenCV Installation Instructions

#### Linux

`sudo apt install libopencv-dev`

#### Windows

Our recommended way of getting OpenCV on Windows is by installing pre-built libraries. There's even a PowerShell script in `scripts/install-opencv.ps1` that will do much of the work for you; it will place OpenCV in your `C:\` folder. Next, you will need to add it to your PATH, which you can do from PowerShell with the .NET `SetEnvironmentVariable` method (TODO just do this in the script).
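For instance, something like the following, run from an elevated PowerShell. The path shown assumes the layout the CI uses (`C:\OpenCV\Build\x64\vc14\bin`); adjust it to wherever the script actually placed OpenCV:

```powershell
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\OpenCV\Build\x64\vc14\bin", "Machine")
```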
## Running the program

Run the `green_screen` executable in the `bin` folder with no arguments for usage details and defaults, and then fill in any customizations you might need.
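As an illustration (all values here are hypothetical; see the usage text for the full argument list), a two-camera run with the 9x6 board mentioned in the calibration note below, 24 mm squares, and a 1.5 m depth threshold could look like:

```
./green_screen 2 6 9 24.0 1500
```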
## A Note on Calibration

This program relies on transforming the backup camera's depth image into the main camera's color space. That transformation requires knowing the transformation between the two cameras, and to find it, we must calibrate the cameras. This program relies on OpenCV's chessboard calibration functions, so you should have a chessboard pattern on hand while calibrating the cameras. If you don't have one, print one out. You can find a 9x6 one [here](https://docs.opencv.org/2.4/_downloads/pattern.png) (note that the 9x6 comes from the interior corners, not the number of squares). The example requires the calibration board to be in view of both devices' color cameras for many frames, so make sure it's visible to both cameras.
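To make that counting concrete, a minimal sketch of how the board dimensions map to OpenCV's corner finder (the example does the equivalent with its command-line arguments; `color_image` here is an assumed `cv::Mat` view of a camera's color image):

```cpp
cv::Size pattern(9, 6); // width x height, counted in *interior* corners, not squares
std::vector<cv::Point2f> corners;
bool found = cv::findChessboardCorners(color_image, pattern, corners);
```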
Also, DO NOT move the cameras during or after calibration! Changing the relative positions of the cameras will cause the backup camera to provide inaccurate information.

## The Green Screen

Once calibration has completed, a new window should pop up with a steady video stream. If it's working right, everything further from the camera than the threshold will appear as green.

# Potential reasons for failure

- The usable depth threshold is limited by the range of the depth camera, which depends on the depth mode. You may need to change the depth mode depending on how far away you need your cutoff to be. See [here](https://docs.microsoft.com/en-us/azure/Kinect-dk/hardware-specification) for the specs.

- If you're on Linux and you're getting strange libusb or libuvc errors when you try to use more than one camera, see [here](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/485).

- If you're on Windows and you're getting a popup about not being able to find an OpenCV dll, re-check your PATH to make sure the folder containing that dll is in it.
@@ -0,0 +1,601 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#include <algorithm>
#include <iostream>
#include <vector>
#include <string>
#include <chrono>
#include <limits>

#include <k4a/k4a.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using std::cout;
using std::endl;
using std::vector;

#include "transformation.h"
#include "MultiDeviceCapturer.h"

// Allowing at least 160 microseconds between depth cameras should ensure they do not interfere with one another.
constexpr uint32_t MIN_TIME_BETWEEN_DEPTH_CAMERA_PICTURES_USEC = 160;

// ideally, we could generalize this to many OpenCV types
static cv::Mat color_to_opencv(const k4a::image &im);
static cv::Mat depth_to_opencv(const k4a::image &im);
static cv::Matx33f calibration_to_color_camera_matrix(const k4a::calibration &cal);
static Transformation get_depth_to_color_transformation_from_calibration(const k4a::calibration &cal);
static k4a::calibration construct_device_to_device_calibration(const k4a::calibration &main_cal,
                                                               const k4a::calibration &secondary_cal,
                                                               const Transformation &secondary_to_main);
static vector<float> calibration_to_color_camera_dist_coeffs(const k4a::calibration &cal);
static bool find_chessboard_corners_helper(const cv::Mat &main_color_image,
                                           const cv::Mat &secondary_color_image,
                                           const cv::Size &chessboard_pattern,
                                           vector<cv::Point2f> &main_chessboard_corners,
                                           vector<cv::Point2f> &secondary_chessboard_corners);
static Transformation stereo_calibration(const k4a::calibration &main_calib,
                                         const k4a::calibration &secondary_calib,
                                         const vector<vector<cv::Point2f>> &main_chessboard_corners_list,
                                         const vector<vector<cv::Point2f>> &secondary_chessboard_corners_list,
                                         const cv::Size &image_size,
                                         const cv::Size &chessboard_pattern,
                                         float chessboard_square_length);
static k4a_device_configuration_t get_main_config();
static k4a_device_configuration_t get_secondary_config();
static Transformation calibrate_devices(MultiDeviceCapturer &capturer,
                                        const k4a_device_configuration_t &main_config,
                                        const k4a_device_configuration_t &secondary_config,
                                        const cv::Size &chessboard_pattern,
                                        float chessboard_square_length,
                                        double calibration_timeout);
static k4a::image create_depth_image_like(const k4a::image &im);

int main(int argc, char **argv)
{
    float chessboard_square_length = 0.; // must be included in the input params
    int32_t color_exposure_usec = 8000;  // somewhat reasonable default exposure time
    int32_t powerline_freq = 2;          // default to a 60 Hz powerline
    cv::Size chessboard_pattern(0, 0);   // height, width. Both need to be set.
    uint16_t depth_threshold = 1000;     // default to 1 meter
    size_t num_devices = 0;
    double calibration_timeout = 60.0; // default to timing out after 60s of trying to get calibrated
    double greenscreen_duration = std::numeric_limits<double>::max(); // run forever

    vector<uint32_t> device_indices{ 0 }; // Set up a MultiDeviceCapturer to handle getting many synchronous captures
                                          // Note that the order of indices in device_indices is not necessarily
                                          // preserved because MultiDeviceCapturer tries to find the master device
                                          // based on which one has sync out plugged in. Start with just { 0 }, and
                                          // add another if needed

    if (argc < 5)
    {
        cout << "Usage: green_screen <num-cameras> <board-height> <board-width> <board-square-length> "
                "[depth-threshold-mm (default 1000)] [color-exposure-time-usec (default 8000)] "
                "[powerline-frequency-mode (default 2 for 60 Hz)] [calibration-timeout-sec (default 60)] "
                "[greenscreen-duration-sec (default infinity- run forever)]"
             << endl;
        throw std::runtime_error("Not enough arguments!");
    }
    else
    {
        num_devices = static_cast<size_t>(atoi(argv[1]));
        if (num_devices > k4a::device::get_installed_count())
        {
            throw(std::runtime_error("Not enough cameras plugged in!"));
        }
        chessboard_pattern.height = atoi(argv[2]);
        chessboard_pattern.width = atoi(argv[3]);
        chessboard_square_length = static_cast<float>(atof(argv[4]));

        if (argc > 5)
        {
            depth_threshold = static_cast<uint16_t>(atoi(argv[5]));
            if (argc > 6)
            {
                color_exposure_usec = atoi(argv[6]);
                if (argc > 7)
                {
                    powerline_freq = atoi(argv[7]);
                    if (argc > 8)
                    {
                        calibration_timeout = atof(argv[8]);
                        if (argc > 9)
                        {
                            greenscreen_duration = atof(argv[9]);
                        }
                    }
                }
            }
        }
    }

    if (num_devices != 2 && num_devices != 1)
    {
        throw std::runtime_error("Invalid choice for number of devices!");
    }
    else if (num_devices == 2)
    {
        device_indices.emplace_back(1); // now device indices are { 0, 1 }
    }
    if (chessboard_pattern.height == 0)
    {
        throw std::runtime_error("Chessboard height is not properly set!");
    }
    if (chessboard_pattern.width == 0)
    {
        throw std::runtime_error("Chessboard width is not properly set!");
    }
    if (chessboard_square_length == 0.)
    {
        throw std::runtime_error("Chessboard square size is not properly set!");
    }

    cout << "Chessboard height: " << chessboard_pattern.height << ". Chessboard width: " << chessboard_pattern.width
         << ". Chessboard square length: " << chessboard_square_length << endl;
    cout << "Depth threshold: " << depth_threshold << ". Color exposure time: " << color_exposure_usec
         << ". Powerline frequency mode: " << powerline_freq << endl;

    MultiDeviceCapturer capturer(device_indices, color_exposure_usec, powerline_freq);

    // Create configurations for devices
    k4a_device_configuration_t main_config = get_main_config();
    if (num_devices == 1) // no need to have a master cable if it's standalone
    {
        main_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_STANDALONE;
    }
    k4a_device_configuration_t secondary_config = get_secondary_config();

    // Construct all the things that we'll need whether we are running with 1 or 2 cameras
    k4a::calibration main_calibration = capturer.get_master_device().get_calibration(main_config.depth_mode,
                                                                                     main_config.color_resolution);

    // Set up a transformation. DO THIS OUTSIDE OF YOUR MAIN LOOP! Constructing transformations involves
    // time-intensive hardware setup and should not change once you have a rigid setup, so only call it once or it
    // will run very slowly.
    k4a::transformation main_depth_to_main_color(main_calibration);

    capturer.start_devices(main_config, secondary_config);
    // get an image to be the background
    vector<k4a::capture> background_captures = capturer.get_synchronized_captures(secondary_config);
    cv::Mat background_image = color_to_opencv(background_captures[0].get_color_image());
    cv::Mat output_image = background_image.clone(); // allocated outside the loop to avoid re-creating every time

    if (num_devices == 1)
    {
        std::chrono::time_point<std::chrono::system_clock> start_time = std::chrono::system_clock::now();
        while (std::chrono::duration<double>(std::chrono::system_clock::now() - start_time).count() <
               greenscreen_duration)
        {
            vector<k4a::capture> captures;
            // secondary_config isn't actually used here because there's no secondary device, but the function needs it
            captures = capturer.get_synchronized_captures(secondary_config, true);
            k4a::image main_color_image = captures[0].get_color_image();
            k4a::image main_depth_image = captures[0].get_depth_image();

            // let's green screen out things that are far away.
            // first: let's get the main depth image into the color camera space
            k4a::image main_depth_in_main_color = create_depth_image_like(main_color_image);
            main_depth_to_main_color.depth_image_to_color_camera(main_depth_image, &main_depth_in_main_color);
            cv::Mat cv_main_depth_in_main_color = depth_to_opencv(main_depth_in_main_color);
            cv::Mat cv_main_color_image = color_to_opencv(main_color_image);

            // single-camera case
            cv::Mat within_threshold_range = (cv_main_depth_in_main_color != 0) &
                                             (cv_main_depth_in_main_color < depth_threshold);
            // show the close details
            cv_main_color_image.copyTo(output_image, within_threshold_range);
            // hide the rest with the background image
            background_image.copyTo(output_image, ~within_threshold_range);

            cv::imshow("Green Screen", output_image);
            cv::waitKey(1);
        }
    }
    else if (num_devices == 2)
    {
        // This wraps all the device-to-device details
        Transformation tr_secondary_color_to_main_color = calibrate_devices(capturer,
                                                                            main_config,
                                                                            secondary_config,
                                                                            chessboard_pattern,
                                                                            chessboard_square_length,
                                                                            calibration_timeout);

        k4a::calibration secondary_calibration =
            capturer.get_subordinate_device_by_index(0).get_calibration(secondary_config.depth_mode,
                                                                        secondary_config.color_resolution);
        // Get the transformation from secondary depth to secondary color using its calibration object
        Transformation tr_secondary_depth_to_secondary_color = get_depth_to_color_transformation_from_calibration(
            secondary_calibration);

        // We now have the secondary depth to secondary color transform. We also have the transformation from the
        // secondary color perspective to the main color perspective from the calibration earlier. Now let's compose
        // the depth secondary -> color secondary, color secondary -> color main into depth secondary -> color main
        Transformation tr_secondary_depth_to_main_color = tr_secondary_depth_to_secondary_color.compose_with(
            tr_secondary_color_to_main_color);

        // Construct a new calibration object to transform from the secondary depth camera to the main color camera
        k4a::calibration secondary_depth_to_main_color_cal =
            construct_device_to_device_calibration(main_calibration,
                                                   secondary_calibration,
                                                   tr_secondary_depth_to_main_color);
        k4a::transformation secondary_depth_to_main_color(secondary_depth_to_main_color_cal);

        std::chrono::time_point<std::chrono::system_clock> start_time = std::chrono::system_clock::now();
        while (std::chrono::duration<double>(std::chrono::system_clock::now() - start_time).count() <
               greenscreen_duration)
        {
            vector<k4a::capture> captures;
            captures = capturer.get_synchronized_captures(secondary_config, true);
            k4a::image main_color_image = captures[0].get_color_image();
            k4a::image main_depth_image = captures[0].get_depth_image();

            // let's green screen out things that are far away.
            // first: let's get the main depth image into the color camera space
            k4a::image main_depth_in_main_color = create_depth_image_like(main_color_image);
            main_depth_to_main_color.depth_image_to_color_camera(main_depth_image, &main_depth_in_main_color);
            cv::Mat cv_main_depth_in_main_color = depth_to_opencv(main_depth_in_main_color);
            cv::Mat cv_main_color_image = color_to_opencv(main_color_image);

            k4a::image secondary_depth_image = captures[1].get_depth_image();

            // Get the depth image in the main color perspective
            k4a::image secondary_depth_in_main_color = create_depth_image_like(main_color_image);
            secondary_depth_to_main_color.depth_image_to_color_camera(secondary_depth_image,
                                                                      &secondary_depth_in_main_color);
            cv::Mat cv_secondary_depth_in_main_color = depth_to_opencv(secondary_depth_in_main_color);

            // Now it's time to actually construct the green screen. Where the depth is 0, the camera doesn't know
            // how far away the object is because it didn't get a response at that point. That's where we'll try to
            // fill in the gaps with the other camera.
            cv::Mat main_valid_mask = cv_main_depth_in_main_color != 0;
            cv::Mat secondary_valid_mask = cv_secondary_depth_in_main_color != 0;
            // build depth mask. If the main camera depth for a pixel is valid and the depth is within the threshold,
            // then set the mask to display that pixel. If the main camera depth for a pixel is invalid but the
            // secondary depth for a pixel is valid and within the threshold, then set the mask to display that pixel.
            cv::Mat within_threshold_range = (main_valid_mask & (cv_main_depth_in_main_color < depth_threshold)) |
                                             (~main_valid_mask & secondary_valid_mask &
                                              (cv_secondary_depth_in_main_color < depth_threshold));
            // copy main color image to output image only where the mask within_threshold_range is true
            cv_main_color_image.copyTo(output_image, within_threshold_range);
            // fill the rest with the background image
            background_image.copyTo(output_image, ~within_threshold_range);

            cv::imshow("Green Screen", output_image);
            cv::waitKey(1);
        }
    }
    else
    {
        throw std::runtime_error("Invalid number of devices!");
    }
}
static cv::Mat color_to_opencv(const k4a::image &im)
{
    cv::Mat cv_image_with_alpha(im.get_height_pixels(), im.get_width_pixels(), CV_8UC4, (void *)im.get_buffer());
    cv::Mat cv_image_no_alpha;
    cv::cvtColor(cv_image_with_alpha, cv_image_no_alpha, cv::COLOR_BGRA2BGR);
    return cv_image_no_alpha;
}

static cv::Mat depth_to_opencv(const k4a::image &im)
{
    return cv::Mat(im.get_height_pixels(),
                   im.get_width_pixels(),
                   CV_16U,
                   (void *)im.get_buffer(),
                   static_cast<size_t>(im.get_stride_bytes()));
}

static cv::Matx33f calibration_to_color_camera_matrix(const k4a::calibration &cal)
{
    const k4a_calibration_intrinsic_parameters_t::_param &i = cal.color_camera_calibration.intrinsics.parameters.param;
    cv::Matx33f camera_matrix = cv::Matx33f::zeros();
    camera_matrix(0, 0) = i.fx;
    camera_matrix(1, 1) = i.fy;
    camera_matrix(0, 2) = i.cx;
    camera_matrix(1, 2) = i.cy;
    camera_matrix(2, 2) = 1;
    return camera_matrix;
}

static Transformation get_depth_to_color_transformation_from_calibration(const k4a::calibration &cal)
{
    const k4a_calibration_extrinsics_t &ex = cal.extrinsics[K4A_CALIBRATION_TYPE_DEPTH][K4A_CALIBRATION_TYPE_COLOR];
    Transformation tr;
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            tr.R(i, j) = ex.rotation[i * 3 + j];
        }
    }
    tr.t = cv::Vec3d(ex.translation[0], ex.translation[1], ex.translation[2]);
    return tr;
}

// This function constructs a calibration that operates as a transformation between the secondary device's depth camera
// and the main camera's color camera. IT WILL NOT GENERALIZE TO OTHER TRANSFORMS. Under the hood, the transformation's
// depth_image_to_color_camera method can be thought of as converting each depth pixel to a 3d point using the
// intrinsics of the depth camera, then using the calibration's extrinsics to convert between depth and color, then
// using the color intrinsics to produce the point in the color camera perspective.
static k4a::calibration construct_device_to_device_calibration(const k4a::calibration &main_cal,
                                                               const k4a::calibration &secondary_cal,
                                                               const Transformation &secondary_to_main)
{
    k4a::calibration cal = secondary_cal;
    k4a_calibration_extrinsics_t &ex = cal.extrinsics[K4A_CALIBRATION_TYPE_DEPTH][K4A_CALIBRATION_TYPE_COLOR];
    for (int i = 0; i < 3; ++i)
    {
        for (int j = 0; j < 3; ++j)
        {
            ex.rotation[i * 3 + j] = static_cast<float>(secondary_to_main.R(i, j));
        }
    }
    for (int i = 0; i < 3; ++i)
    {
        ex.translation[i] = static_cast<float>(secondary_to_main.t[i]);
    }
    cal.color_camera_calibration = main_cal.color_camera_calibration;
    return cal;
}

static vector<float> calibration_to_color_camera_dist_coeffs(const k4a::calibration &cal)
{
    const k4a_calibration_intrinsic_parameters_t::_param &i = cal.color_camera_calibration.intrinsics.parameters.param;
    return { i.k1, i.k2, i.p1, i.p2, i.k3, i.k4, i.k5, i.k6 };
}

bool find_chessboard_corners_helper(const cv::Mat &main_color_image,
                                    const cv::Mat &secondary_color_image,
                                    const cv::Size &chessboard_pattern,
                                    vector<cv::Point2f> &main_chessboard_corners,
                                    vector<cv::Point2f> &secondary_chessboard_corners)
{
    bool found_chessboard_main = cv::findChessboardCorners(main_color_image,
                                                           chessboard_pattern,
                                                           main_chessboard_corners);
    bool found_chessboard_secondary = cv::findChessboardCorners(secondary_color_image,
                                                                chessboard_pattern,
                                                                secondary_chessboard_corners);

    // Cover the failure cases where chessboards were not found in one or both images.
    if (!found_chessboard_main || !found_chessboard_secondary)
    {
        if (found_chessboard_main)
        {
            cout << "Could not find the chessboard corners in the secondary image. Trying again...\n";
        }
        // Likewise, if the chessboard was found in the secondary image, it was not found in the main image.
        else if (found_chessboard_secondary)
        {
            cout << "Could not find the chessboard corners in the main image. Trying again...\n";
        }
        // The only remaining case is the corners were in neither image.
        else
        {
            cout << "Could not find the chessboard corners in either image. Trying again...\n";
        }
        return false;
    }
    // Before we go on, there's a quick problem with calibration to address. Because the chessboard looks the same when
    // rotated 180 degrees, it is possible that the chessboard corner finder may find the correct points, but in the
    // wrong order.

    // A visual:
    //        Image 1                  Image 2
    // .....................    .....................
    // .....................    .....................
    // .........xxxxx2......    .....xxxxx1..........
    // .........xxxxxx......    .....xxxxxx..........
    // .........xxxxxx......    .....xxxxxx..........
    // .........1xxxxx......    .....2xxxxx..........
    // .....................    .....................
    // .....................    .....................

    // The problem occurs when this case happens: the find_chessboard() function correctly identifies the points on the
    // chessboard (shown as 'x's) but the order of those points differs between images taken by the two cameras.
    // Specifically, the first point in the list of points found for the first image (1) is the *last* point in the
    // list of points found for the second image (2), though they correspond to the same physical point on the
    // chessboard.

    // To avoid this problem, we can make the assumption that both of the cameras will be oriented in a similar manner
    // (e.g. turning one of the cameras upside down will break this assumption) and enforce that the vectors between
    // the first and last points found in pixel space (which will be at opposite ends of the chessboard) are pointing
    // in the same direction- so, the dot product of the two vectors is positive.

    cv::Vec2f main_image_corners_vec = main_chessboard_corners.back() - main_chessboard_corners.front();
    cv::Vec2f secondary_image_corners_vec = secondary_chessboard_corners.back() - secondary_chessboard_corners.front();
    if (main_image_corners_vec.dot(secondary_image_corners_vec) <= 0.0)
    {
        std::reverse(secondary_chessboard_corners.begin(), secondary_chessboard_corners.end());
    }
    return true;
}
|
||||
|
||||
Transformation stereo_calibration(const k4a::calibration &main_calib,
|
||||
const k4a::calibration &secondary_calib,
|
||||
const vector<vector<cv::Point2f>> &main_chessboard_corners_list,
|
||||
const vector<vector<cv::Point2f>> &secondary_chessboard_corners_list,
|
||||
const cv::Size &image_size,
|
||||
const cv::Size &chessboard_pattern,
|
||||
float chessboard_square_length)
|
||||
{
|
||||
// We have points in each image that correspond to the corners that the findChessboardCorners function found.
|
||||
// However, we still need the points in 3 dimensions that these points correspond to. Because we are ultimately only
|
||||
// interested in find a transformation between two cameras, these points don't have to correspond to an external
|
||||
// "origin" point. The only important thing is that the relative distances between points are accurate. As a result,
|
||||
// we can simply make the first corresponding point (0, 0) and construct the remaining points based on that one. The
|
||||
// order of points inserted into the vector here matches the ordering of findChessboardCorners. The units of these
|
||||
// points are in millimeters, mostly because the depth provided by the depth cameras is also provided in
|
||||
// millimeters, which makes for easy comparison.
|
||||
vector<cv::Point3f> chessboard_corners_world;
|
||||
for (int h = 0; h < chessboard_pattern.height; ++h)
|
||||
{
|
||||
for (int w = 0; w < chessboard_pattern.width; ++w)
|
||||
{
|
||||
chessboard_corners_world.emplace_back(
|
||||
cv::Point3f{ w * chessboard_square_length, h * chessboard_square_length, 0.0 });
|
||||
}
|
||||
}

    // Calibrating the cameras requires a lot of data. OpenCV's stereoCalibrate function requires:
    // - a list of points in real 3d space that will be used to calibrate*
    // - a corresponding list of pixel coordinates as seen by the first camera*
    // - a corresponding list of pixel coordinates as seen by the second camera*
    // - the camera matrix of the first camera
    // - the distortion coefficients of the first camera
    // - the camera matrix of the second camera
    // - the distortion coefficients of the second camera
    // - the size (in pixels) of the images
    // - R: stereoCalibrate stores the rotation matrix from the first camera to the second here
    // - t: stereoCalibrate stores the translation vector from the first camera to the second here
    // - E: stereoCalibrate stores the essential matrix here (we don't use this)
    // - F: stereoCalibrate stores the fundamental matrix here (we don't use this)
    //
    // * note: OpenCV's stereoCalibrate actually requires as input an array of arrays of points for these arguments,
    // allowing a caller to provide multiple frames from the same camera with corresponding points. For example, if
    // extremely high precision were required, many images could be taken with each camera, findChessboardCorners
    // applied to each of those images, and OpenCV could jointly solve for all of the pairs of corresponding images.
    // We take advantage of this by collecting corners from many synchronized frames (see calibrate_devices below)
    // and letting OpenCV solve over all of them at once. This is also why each vector of corners is placed into
    // another vector. Because the physical chessboard is the same in every frame, the single vector of world-space
    // corner positions is simply replicated once per captured image pair.
    //
    // OpenCV's calibration functions also require that these points be single-precision (F32) types, so we use
    // those. However, OpenCV still provides its output as doubles, strangely enough.
    vector<vector<cv::Point3f>> chessboard_corners_world_nested_for_cv(main_chessboard_corners_list.size(),
                                                                       chessboard_corners_world);

    cv::Matx33f main_camera_matrix = calibration_to_color_camera_matrix(main_calib);
    cv::Matx33f secondary_camera_matrix = calibration_to_color_camera_matrix(secondary_calib);
    vector<float> main_dist_coeff = calibration_to_color_camera_dist_coeffs(main_calib);
    vector<float> secondary_dist_coeff = calibration_to_color_camera_dist_coeffs(secondary_calib);

    // Finally, we'll actually calibrate the cameras.
    // Pass secondary first, then main, because we want a transform from secondary to main.
    Transformation tr;
    double error = cv::stereoCalibrate(chessboard_corners_world_nested_for_cv,
                                       secondary_chessboard_corners_list,
                                       main_chessboard_corners_list,
                                       secondary_camera_matrix,
                                       secondary_dist_coeff,
                                       main_camera_matrix,
                                       main_dist_coeff,
                                       image_size,
                                       tr.R, // output
                                       tr.t, // output
                                       cv::noArray(),
                                       cv::noArray(),
                                       cv::CALIB_FIX_INTRINSIC | cv::CALIB_RATIONAL_MODEL);
    cout << "Finished calibrating!\n";
    cout << "Got error of " << error << "\n";
    return tr;
}

// The following functions provide the configurations that should be used for each camera.
// NOTE: Both cameras must use the same configuration (framerate, resolution, color and depth modes; TODO: pin down
// exactly which settings need to match).
static k4a_device_configuration_t get_main_config()
{
    k4a_device_configuration_t camera_config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    camera_config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    camera_config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    camera_config.depth_mode = K4A_DEPTH_MODE_WFOV_UNBINNED; // no need for depth during calibration
    camera_config.camera_fps = K4A_FRAMES_PER_SECOND_15;
    camera_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER; // the main device is the master in the two-camera
                                                                // case; this is changed later for the single-camera
                                                                // case
    camera_config.subordinate_delay_off_master_usec = 0; // must be zero; this device is the master, so its delay is 0
    // Let half of the time needed for the depth cameras to not interfere with one another pass here (the other half
    // is in the master-to-subordinate delay)
    camera_config.depth_delay_off_color_usec = -static_cast<int32_t>(MIN_TIME_BETWEEN_DEPTH_CAMERA_PICTURES_USEC / 2);
    camera_config.synchronized_images_only = true;
    return camera_config;
}

static k4a_device_configuration_t get_secondary_config()
{
    k4a_device_configuration_t camera_config = get_main_config();
    // The color camera must be running for synchronization to work, even though we don't really use it
    camera_config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE; // secondary will be the subordinate
    camera_config.subordinate_delay_off_master_usec = 0; // sync up the color cameras to help calibration
    // Only account for half of the delay here. The other half comes from the master depth camera capturing before
    // the master color camera.
    camera_config.depth_delay_off_color_usec = MIN_TIME_BETWEEN_DEPTH_CAMERA_PICTURES_USEC / 2;
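    // Worked example with a hypothetical value: if MIN_TIME_BETWEEN_DEPTH_CAMERA_PICTURES_USEC were 160, the
    // master's depth camera would fire 80 us before its color camera and this subordinate's depth camera 80 us
    // after its own color camera; since the color cameras are synchronized, the two depth captures end up the
    // full 160 us apart.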
    camera_config.synchronized_images_only = true;
    return camera_config;
}

static Transformation calibrate_devices(MultiDeviceCapturer &capturer,
                                        const k4a_device_configuration_t &main_config,
                                        const k4a_device_configuration_t &secondary_config,
                                        const cv::Size &chessboard_pattern,
                                        float chessboard_square_length,
                                        double calibration_timeout)
{
    k4a::calibration main_calibration = capturer.get_master_device().get_calibration(main_config.depth_mode,
                                                                                     main_config.color_resolution);

    k4a::calibration secondary_calibration =
        capturer.get_subordinate_device_by_index(0).get_calibration(secondary_config.depth_mode,
                                                                    secondary_config.color_resolution);
    vector<vector<cv::Point2f>> main_chessboard_corners_list;
    vector<vector<cv::Point2f>> secondary_chessboard_corners_list;
    std::chrono::time_point<std::chrono::system_clock> start_time = std::chrono::system_clock::now();
    while (std::chrono::duration<double>(std::chrono::system_clock::now() - start_time).count() < calibration_timeout)
    {
        vector<k4a::capture> captures = capturer.get_synchronized_captures(secondary_config);
        k4a::capture &main_capture = captures[0];
        k4a::capture &secondary_capture = captures[1];
        // get_color_image is guaranteed to be non-null because we use get_synchronized_captures for color
        // (get_synchronized_captures also offers a flag to use depth for the secondary camera instead of color).
        k4a::image main_color_image = main_capture.get_color_image();
        k4a::image secondary_color_image = secondary_capture.get_color_image();
        cv::Mat cv_main_color_image = color_to_opencv(main_color_image);
        cv::Mat cv_secondary_color_image = color_to_opencv(secondary_color_image);

        vector<cv::Point2f> main_chessboard_corners;
        vector<cv::Point2f> secondary_chessboard_corners;
        bool got_corners = find_chessboard_corners_helper(cv_main_color_image,
                                                          cv_secondary_color_image,
                                                          chessboard_pattern,
                                                          main_chessboard_corners,
                                                          secondary_chessboard_corners);
        if (got_corners)
        {
            main_chessboard_corners_list.emplace_back(main_chessboard_corners);
            secondary_chessboard_corners_list.emplace_back(secondary_chessboard_corners);
            cv::drawChessboardCorners(cv_main_color_image, chessboard_pattern, main_chessboard_corners, true);
            cv::drawChessboardCorners(cv_secondary_color_image, chessboard_pattern, secondary_chessboard_corners, true);
        }

        cv::imshow("Chessboard view from main camera", cv_main_color_image);
        cv::waitKey(1);
        cv::imshow("Chessboard view from secondary camera", cv_secondary_color_image);
        cv::waitKey(1);

        // Get 20 frames before doing calibration.
        if (main_chessboard_corners_list.size() >= 20)
        {
            cout << "Calculating calibration..." << endl;
            return stereo_calibration(main_calibration,
                                      secondary_calibration,
                                      main_chessboard_corners_list,
                                      secondary_chessboard_corners_list,
                                      cv_main_color_image.size(),
                                      chessboard_pattern,
                                      chessboard_square_length);
        }
    }
    throw std::runtime_error("Calibration timed out!");
}

static k4a::image create_depth_image_like(const k4a::image &im)
{
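    // (Stride note: K4A_IMAGE_FORMAT_DEPTH16 stores one uint16_t per pixel, so each row is width * 2 bytes.)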
    return k4a::image::create(K4A_IMAGE_FORMAT_DEPTH16,
                              im.get_width_pixels(),
                              im.get_height_pixels(),
                              im.get_width_pixels() * static_cast<int>(sizeof(uint16_t)));
}

@@ -0,0 +1,55 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once

#include <opencv2/core.hpp>

struct Transformation
{
    cv::Matx33d R;
    cv::Vec3d t;

    // Construct an identity transformation.
    Transformation() : R(cv::Matx33d::eye()), t(0., 0., 0.) {}

    // Construct from a homogeneous transformation matrix H.
    Transformation(const cv::Matx44d &H) : R(H.get_minor<3, 3>(0, 0)), t(H(0, 3), H(1, 3), H(2, 3)) {}

    // Create a homogeneous matrix from this transformation.
    cv::Matx44d to_homogeneous() const
    {
        return cv::Matx44d(
            // row 1
            R(0, 0),
            R(0, 1),
            R(0, 2),
            t(0),
            // row 2
            R(1, 0),
            R(1, 1),
            R(1, 2),
            t(1),
            // row 3
            R(2, 0),
            R(2, 1),
            R(2, 2),
            t(2),
            // row 4
            0,
            0,
            0,
            1);
    }

    // Construct a transformation equivalent to this transformation followed by the second transformation.
    Transformation compose_with(const Transformation &second_transformation) const
    {
        // get this transform
        cv::Matx44d H_1 = to_homogeneous();
        // get the transform to be composed with this one
        cv::Matx44d H_2 = second_transformation.to_homogeneous();
        // get the combined transform
        cv::Matx44d H_3 = H_1 * H_2;
        return Transformation(H_3);
    }
};
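
// Usage sketch (illustration only; not used by the example): the matrix produced by a.compose_with(b) is
// a.to_homogeneous() * b.to_homogeneous(), so when the result is applied to a homogeneous column vector, b's
// transform acts on the point first and a's second:
//
//     Transformation a, b;
//     cv::Matx44d H = a.compose_with(b).to_homogeneous();
//     cv::Vec4d p(100.0, 0.0, 0.0, 1.0); // homogeneous point, millimeters
//     cv::Vec4d p_out = H * p;           // equals a's matrix * (b's matrix * p)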

@@ -1,8 +1,12 @@
 # Copyright (c) Microsoft Corporation. All rights reserved.
 # Licensed under the MIT License.
 
-add_executable(opencv_example
-    main.cpp)
+add_executable(opencv_example main.cpp)
+target_include_directories(opencv_example PRIVATE ${OpenCV_INCLUDE_DIRS})
 
-target_link_libraries(opencv_example PRIVATE
-    k4a::k4a)
+# OpenCV_LIBS, by default, picks up the debug version of OpenCV on Windows even in release mode, which was causing a dependency on non-redistributable Visual Studio dlls.
+if (${CMAKE_SYSTEM_NAME} STREQUAL "Windows")
+    target_link_libraries(opencv_example PRIVATE k4a::k4a ${OpenCV_DIR}/Opencv_world320.lib)
+else()
+    target_link_libraries(opencv_example PRIVATE k4a::k4a ${OpenCV_LIBS})
+endif()

@@ -8,12 +8,9 @@
 #include <vector>
 using namespace std;
 
-// #define HAVE_OPENCV
-#ifdef HAVE_OPENCV
 #include "opencv2/core.hpp"
 #include "opencv2/calib3d.hpp"
 using namespace cv;
-#endif
 
 static void clean_up(k4a_device_t device)
 {

@@ -82,7 +79,6 @@ int main(int argc, char ** /*argv*/)
                                  &valid);
     }
 
-#ifdef HAVE_OPENCV
     // converting the calibration data to OpenCV format
     // extrinsic transformation from color to depth camera
     Mat se3 =

@@ -105,7 +101,12 @@ int main(int argc, char ** /*argv*/)
 
     // OpenCV project function
     vector<Point2f> cv_points_2d(points_3d.size());
-    projectPoints(*(vector<Point3f> *)&points_3d, r_vec, t_vec, camera_matrix, dist_coeffs, cv_points_2d);
+    projectPoints(*reinterpret_cast<vector<Point3f> *>(&points_3d),
+                  r_vec,
+                  t_vec,
+                  camera_matrix,
+                  dist_coeffs,
+                  cv_points_2d);
 
     for (size_t i = 0; i < points_3d.size(); i++)
     {

@@ -113,8 +114,7 @@ int main(int argc, char ** /*argv*/)
printf("OpenCV projectPoints:\t\t(%.5f, %.5f)\n", cv_points_2d[i].x, cv_points_2d[i].y);
|
||||
printf("k4a_calibration_3d_to_2d:\t(%.5f, %.5f)\n\n", k4a_points_2d[i].v[0], k4a_points_2d[i].v[1]);
|
||||
}
|
||||
#endif
|
||||
|
||||
clean_up(device);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
|
|

@@ -0,0 +1,39 @@
$ErrorActionPreference = "Stop"

function Download-ToTemp
{
    param
    (
        [Parameter(Mandatory)]
        [string] $url,
        [string] $filename
    )

    if (-not ($filename))
    {
        $filename = Split-Path -Path $url -Leaf
    }

    if (-not ($filename))
    {
        Write-Error "Unable to parse filename from $url"
        return $null
    }

    $tempDir = [System.IO.Path]::GetTempPath()
    $path = Join-Path -Path $tempDir -ChildPath $filename

    Write-Host -NoNewline "Downloading $url to $path..."
    Invoke-WebRequest -Uri $url -OutFile $path -UserAgent "NativeClient" -MaximumRetryCount 5 -RetryIntervalSec 60
    Write-Host "Done"

    return $path
}

# Download OpenCV
$url = "https://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.2.0/opencv-3.2.0-vc14.exe/download"
$filename = "opencv-3.2.0-vc14.exe"
$opencv_exe = Download-ToTemp -url $url -filename $filename

Start-Process -Wait $opencv_exe -ArgumentList -o"C:\",-y
Write-Host "OpenCV installed."

@@ -3,7 +3,13 @@
 include(k4aTest)
 
+# Only run green screen tests if OpenCV is installed
+find_package(OpenCV)
 add_executable(executables_ft executables_ft.cpp)
+if (OpenCV_FOUND)
+    target_compile_definitions(executables_ft PRIVATE -DUSE_OPENCV=1)
+endif()
+
 
 target_link_libraries(executables_ft PRIVATE
     k4a::k4a
     k4ainternal::utcommon

@@ -38,7 +38,7 @@ static int run_and_record_executable(std::string shell_command_path, std::string
     std::string formatted_command = shell_command_path;
     if (!output_path.empty())
     {
-        formatted_command += " 2>&1 > " + output_path;
+        formatted_command += " > " + output_path + " 2>&1";
     }
     // In Linux, forking a process causes the underlying buffers to be forked, too. So, because popen uses fork
     // under the hood, there may have been a risk of printing something in both processes. I'm not sure if this
     // could happen in

@@ -47,10 +47,25 @@ static int run_and_record_executable(std::string shell_command_path, std::string
     FILE *process_stream = POPEN(formatted_command.c_str(), "r");
     if (!process_stream)
     {
-        printf("process_stream is NULL\n");
+        std::cout << "process_stream is NULL" << std::endl;
         return EXIT_FAILURE; // if popen fails, it returns null, which is an error
     }
-    return PCLOSE(process_stream);
+    int return_code = PCLOSE(process_stream);
+    std::cout << "Ran: " << formatted_command << std::endl;
+    std::cout << "<==============================================" << std::endl;
+    try
+    {
+        if (!output_path.empty())
+        {
+            std::cout << std::ifstream(output_path).rdbuf() << std::endl;
+        }
+    }
+    catch (std::exception &e)
+    {
+        std::cout << "Dumping log file threw a std::exception: " << e.what() << std::endl;
+    }
+    std::cout << "==============================================>" << std::endl;
+    return return_code;
 }
 
 /*

@@ -155,6 +170,27 @@ TEST_F(executables_ft, enumerate)
     test_stream_against_regexes(&results, &regexes);
 }
 
+#ifdef USE_OPENCV
+TEST_F(executables_ft, green_screen_single_cam)
+{
+    const std::string green_screen_path = PATH_TO_BIN("green_screen");
+    const std::string green_screen_out = TEST_TEMP_DIR + "/green_screen-single-out.txt";
+    ASSERT_EQ(run_and_record_executable(green_screen_path + " 1 7 5 21 1000 4000 2 30 5", green_screen_out),
+              EXIT_SUCCESS);
+}
+
+TEST_F(executables_ft, green_screen_double_cam)
+{
+    const std::string green_screen_path = PATH_TO_BIN("green_screen");
+    const std::string green_screen_out = TEST_TEMP_DIR + "/green_screen-double-out.txt";
+    ASSERT_EQ(run_and_record_executable(green_screen_path + " 2 7 5 21 1000 4000 2 30 5", green_screen_out),
+              EXIT_SUCCESS);
+    std::ifstream results(green_screen_out.c_str());
+    std::vector<std::string> regexes{ "Finished calibrating!" };
+    test_stream_against_regexes(&results, &regexes);
+}
+#endif
+
 TEST_F(executables_ft, fastpointcloud)
 {
     const std::string fastpoint_path = PATH_TO_BIN("fastpointcloud");

@@ -177,12 +213,22 @@ TEST_F(executables_ft, fastpointcloud)
     test_stream_against_regexes(&fastpointcloud_results, &regexes);
 }
 
+#ifdef USE_OPENCV
 TEST_F(executables_ft, opencv_compatibility)
 {
-    const std::string transformation_dir = TEST_TEMP_DIR;
-    const std::string transformation_path = PATH_TO_BIN("opencv_example");
-    ASSERT_EQ(run_and_record_executable(transformation_path, ""), EXIT_SUCCESS);
+    const std::string opencv_dir = TEST_TEMP_DIR;
+    const std::string opencv_path = PATH_TO_BIN("opencv_example");
+    const std::string opencv_out = TEST_TEMP_DIR + "/opencv-out.txt";
+    ASSERT_EQ(run_and_record_executable(opencv_path, opencv_out), EXIT_SUCCESS);
+
+    std::ifstream opencv_results(opencv_out);
+    ASSERT_TRUE(opencv_results.good());
+
+    std::vector<std::string> regexes{ "3d point:.*", "OpenCV projectPoints:.*", "k4a_calibration_3d_to_2d:.*" };
+
+    test_stream_against_regexes(&opencv_results, &regexes);
 }
+#endif
 
 TEST_F(executables_ft, streaming)
 {