Home

Welcome to the Windows Camera wiki!

Sharing applications

Can Win32 applications use sharing mode, or is it only for UWPs?

The API for sharing apps is cleaner and more obvious in MediaCapture (WinRT), which can be used in Win32/desktop applications as well as UWPs. Sharing mode can also be used in Capture Engine (Win32) apps, but not in DirectShow apps.
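
For illustration, a minimal C++/WinRT sketch of opting into shared mode through MediaCaptureInitializationSettings:

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Media.Capture.h>

using namespace winrt;
using namespace winrt::Windows::Media::Capture;

Windows::Foundation::IAsyncAction InitSharedCaptureAsync()
{
    MediaCapture capture;
    MediaCaptureInitializationSettings settings;

    // SharedReadOnly allows multiple apps to stream from the camera
    // concurrently, at the cost of not being able to change camera controls.
    settings.SharingMode(MediaCaptureSharingMode::SharedReadOnly);

    co_await capture.InitializeAsync(settings);
    // ... create a frame reader / start preview here ...
}
```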

How can sharing mode be enabled in Capture Engine-based apps?

Enabling sharing mode from Win32 requires jumping through a few hoops, since the API to enable shared mode is only available through sensor groups.

You need to enumerate cameras as you normally do, then pass the camera's symbolic link name to the MFCreateSensorGroup API (MFCreateSensorGroup function (mfidl.h) - Win32 apps | Microsoft Docs), which returns an IMFSensorGroup object.

Within that object is a collection of IMFSensorDevice objects (for a physical camera, there will always be exactly one). That IMFSensorDevice has a method, SetSensorDeviceMode (IMFSensorDevice::SetSensorDeviceMode (mfidl.h) - Win32 apps | Microsoft Docs), with which the mode can be set to shared. Then, from the IMFSensorGroup object you got earlier, you can call CreateMediaSource (IMFSensorGroup::CreateMediaSource (mfidl.h) - Win32 apps | Microsoft Docs). The resulting IMFMediaSource will be in shared mode and can be passed into Capture Engine (or Source Reader, depending on the API surface you're using).
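
A minimal sketch of that sequence (error handling trimmed; the symbolic link is assumed to come from MFEnumDeviceSources via the MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_SYMBOLIC_LINK attribute):

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT CreateSharedMediaSource(LPCWSTR symbolicLink, IMFMediaSource** ppSource)
{
    *ppSource = nullptr;

    // The symbolic link identifies the sensor group wrapping this camera.
    ComPtr<IMFSensorGroup> spGroup;
    HRESULT hr = MFCreateSensorGroup(symbolicLink, &spGroup);
    if (FAILED(hr)) return hr;

    // A physical camera's sensor group always contains exactly one device.
    ComPtr<IMFSensorDevice> spDevice;
    hr = spGroup->GetSensorDevice(0, &spDevice);
    if (FAILED(hr)) return hr;

    // Request shared (non-exclusive) access before creating the source.
    hr = spDevice->SetSensorDeviceMode(MFSensorDeviceMode_Shared);
    if (FAILED(hr)) return hr;

    // The resulting media source is in shared mode and can be handed to
    // Capture Engine or Source Reader.
    return spGroup->CreateMediaSource(ppSource);
}
```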

NOTE: This only works for cameras registered under KSCATEGORY_VIDEO_CAMERA (or KSCATEGORY_SENSOR_CAMERA). It will NOT work for legacy cameras that only register under KSCATEGORY_VIDEO/KSCATEGORY_CAPTURE (which includes all the old tuner/capture cards and legacy webcams with pre-Windows 8 custom camera drivers). Those types of cameras do not have a sensor group.

Can sharing mode be used for capture streams (record pin)?

The short answer is that the preview pin can be used most reliably, while support for the record pin is rather spotty.

If the camera in question exposes the three standard MIPI pins (preview, record, and photo), by default the preview pin is the only one exposed to sharing-mode apps.

If the camera in question is a typical USB camera, which exposes only a single record pin (all USB video pins are exposed as record pins), then that pin is exposed to sharing-mode apps.

If the camera in question exposes multiple record pins (some USB cameras do this), then the first pin that supports RGB frames (not just RGB32, but any non-IR/depth subtype) is exposed to sharing-mode apps.

The driver can override this logic by explicitly declaring that record pins can be shared in addition to the preview pin (in the MIPI case), but virtually no MIPI cameras do this today.

Camera Driver

What is the Variable Frame Rate (VFR) control? Should a camera support it?

Historically, USB cameras produced frames at a rate that accounted for prevailing lighting conditions, irrespective of the frame rate they were configured at. For example, even though a media type negotiated at 30fps implies frames emitted approximately every 33.33ms, if lighting is poor these cameras could produce frames at a slower pace, e.g., every 50ms, to improve the noise floor. A camera driver can implement KSPROPERTY_CAMERACONTROL_EXTENDED_VFR (KSPROPERTY_CAMERACONTROL_EXTENDED_VFR - Windows drivers | Microsoft Docs) to let applications choose between a "guaranteed" frame rate suitable for capturing high-motion sports scenes and a "visually pleasing" frame rate for real-time communications. If the device does not support variable frame rate for video, the driver should not implement this control, and variable frame rate will be implied.
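
As a rough application-side sketch, assuming the driver implements the control and that the media source's IKsControl is used to send it (the pin id is a placeholder the caller must supply):

```cpp
#include <windows.h>
#include <ks.h>
#include <ksmedia.h>
#include <ksproxy.h>
#include <mfidl.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Extended camera controls carry a header followed by a value payload.
struct VFR_PAYLOAD
{
    KSCAMERA_EXTENDEDPROP_HEADER Header;
    KSCAMERA_EXTENDEDPROP_VALUE  Value;
};

HRESULT DisableVariableFrameRate(IMFMediaSource* pSource, ULONG pinId)
{
    // Camera media sources expose IKsControl for extended controls.
    ComPtr<IKsControl> spKsControl;
    HRESULT hr = pSource->QueryInterface(IID_PPV_ARGS(&spKsControl));
    if (FAILED(hr)) return hr;

    KSPROPERTY prop = {};
    prop.Set = KSPROPERTYSETID_ExtendedCameraControl;
    prop.Id = KSPROPERTY_CAMERACONTROL_EXTENDED_VFR;
    prop.Flags = KSPROPERTY_TYPE_SET;

    VFR_PAYLOAD payload = {};
    payload.Header.Version = KSCAMERA_EXTENDEDPROP_VERSION;
    payload.Header.PinId = pinId;                          // caller-supplied video pin
    payload.Header.Size = sizeof(payload);
    payload.Header.Flags = KSCAMERA_EXTENDEDPROP_VFR_OFF;  // "guaranteed" frame rate

    ULONG bytesReturned = 0;
    return spKsControl->KsProperty(&prop, sizeof(prop),
                                   &payload, sizeof(payload), &bytesReturned);
}
```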

How should a driver report a KSProperty/event that it implements but the underlying device does not support?

The driver should detect that the underlying device does not support the property/event and report STATUS_NOT_FOUND in that case. Do not attempt any tricks in the INF.
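
A minimal driver-side sketch of that rule (DeviceSupportsControl is a hypothetical capability check):

```cpp
// Sketch: in the property handler, fail with STATUS_NOT_FOUND when the
// hardware lacks support; the pipeline then treats the control as absent.
NTSTATUS HandleExtendedControl(ULONG controlId)
{
    if (!DeviceSupportsControl(controlId))  // hypothetical capability check
    {
        return STATUS_NOT_FOUND;  // do not mask the control via INF tricks
    }

    // ... handle GET/SET for the supported control ...
    return STATUS_SUCCESS;
}
```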

Device MFT (DMFT)

Can a Device MFT expose just the YUY2 media type?

Technically, it is allowed. However, note that NV12 is the preferred media type in the Windows camera stack and should be used to avoid format conversions and unnecessary copying. If the DMFT exposes only YUY2, it should not claim to be DX aware, as YUY2 is supported only in system buffers and is unsupported in the DirectX path.
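
For illustration, DX awareness is advertised through the MF_SA_D3D11_AWARE attribute on the transform's attribute store; a sketch:

```cpp
#include <mfapi.h>
#include <mftransform.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: only a DMFT that can work with DX textures (e.g., NV12) should
// advertise D3D awareness to the pipeline.
HRESULT DeclareD3DAwareness(IMFTransform* pDmft, BOOL isD3DAware)
{
    ComPtr<IMFAttributes> spAttributes;
    HRESULT hr = pDmft->GetAttributes(&spAttributes);
    if (FAILED(hr)) return hr;

    // A YUY2-only DMFT should set FALSE (or simply not set the attribute).
    return spAttributes->SetUINT32(MF_SA_D3D11_AWARE, isD3DAware ? TRUE : FALSE);
}
```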

Where should DMFT do its processing? ProcessInput or ProcessOutput?

Neither. You should not do the processing in ProcessInput or ProcessOutput. If you are using the CAsyncPin in the DMFT sample code, you should do the processing in HRESULT CAsyncInPin::Invoke(_In_ IMFAsyncResult* pResult).

ProcessInput and ProcessOutput serialize media source operations, as they are protected under a single filter lock, so you should not do any CPU/GPU-intensive work within those interface functions. ProcessInput should ideally just take the input sample and spawn a thread for any operations to take place in (which AsyncPin does), and ProcessOutput should ideally just hand the processed samples back to the pipeline.

The DMFT should use the work queue given to it, and set deadlines on the work items placed there. The OS uses those hints to change CPU clock frequency, wake up cores, increase priority for work items, etc.

One can create a serial work queue in CAsyncPin to prevent out-of-order execution and serialize processing: call MFAllocateSerialWorkQueue(MFASYNC_CALLBACK_QUEUE_MULTITHREADED, &m_dwWorkQueueId) once, and then queue each sample with DMFTCHECKHR_GOTO(MFPutWorkItem(m_dwWorkQueueId, static_cast<IMFAsyncCallback*>(m_asyncCallback.Get()), pSample), done); this should serialize the samples.
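
Putting the pattern together, a sketch (member and method names follow the DMFT sample; DoHeavyProcessing and QueueProcessedSample are hypothetical helpers):

```cpp
// ProcessInput side: only queue the sample; a serial work queue guarantees
// in-order execution off the filter lock.
HRESULT CAsyncInPin::SendSample(_In_ IMFSample* pSample)
{
    return MFPutWorkItem(m_dwWorkQueueId,
                         static_cast<IMFAsyncCallback*>(m_asyncCallback.Get()),
                         pSample);
}

// Invoke runs on the work queue: this is where the heavy lifting belongs.
HRESULT CAsyncInPin::Invoke(_In_ IMFAsyncResult* pResult)
{
    ComPtr<IUnknown> spState;
    HRESULT hr = pResult->GetState(&spState);
    if (FAILED(hr)) return hr;

    ComPtr<IMFSample> spSample;
    hr = spState.As(&spSample);
    if (FAILED(hr)) return hr;

    // CPU/GPU-intensive processing happens here, not in ProcessInput/Output.
    hr = DoHeavyProcessing(spSample.Get());      // hypothetical
    if (FAILED(hr)) return hr;

    // Hand the processed sample back so ProcessOutput can deliver it.
    return QueueProcessedSample(spSample.Get()); // hypothetical
}
```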

How can an RGB camera DMFT tell that a Windows Hello session is in progress? How can it tell that a secure Hello session is in progress?

For some camera subsystems, the RGB camera may be used in Hello scenarios, which means the IR camera may be put in secure mode, aka enhanced sign-in security (on Secure BIOS enabled systems). In such situations, shared resources between the RGB and IR cameras may need to be disabled for the RGB stream, to avoid accessing those resources in a non-secure manner while the ISP is fully in secure mode and cannot work on clear RGB and secure IR concurrently. Starting with SV2 (Windows 11 22H2), the camera pipeline sets the Hello profile on the RGB camera, in addition to the IR camera, before streaming starts, to alert the RGB camera driver. If the system has enhanced sign-in security enabled, then all Hello sessions on the system are in secure mode.
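
One way an RGB camera DMFT could observe this, sketched under the assumption that it forwards IKsControl calls to the device (member names follow the DMFT sample; the detection policy is illustrative):

```cpp
#include <ks.h>
#include <ksmedia.h>

// Sketch: the pipeline communicates the selected profile to the driver via
// KSPROPERTY_CAMERACONTROL_EXTENDED_PROFILE. A DMFT that forwards IKsControl
// calls can observe it; KSCAMERAPROFILE_FaceAuth_Mode implies a Hello session.
STDMETHODIMP CMyDeviceMft::KsProperty(
    PKSPROPERTY pProperty, ULONG ulPropertyLength,
    LPVOID pData, ULONG ulDataLength, ULONG* pBytesReturned)
{
    if (ulPropertyLength >= sizeof(KSPROPERTY) &&
        IsEqualGUID(pProperty->Set, KSPROPERTYSETID_ExtendedCameraControl) &&
        pProperty->Id == KSPROPERTY_CAMERACONTROL_EXTENDED_PROFILE &&
        (pProperty->Flags & KSPROPERTY_TYPE_SET) &&
        ulDataLength >= sizeof(KSCAMERA_EXTENDEDPROP_HEADER) +
                        sizeof(KSCAMERA_EXTENDEDPROP_PROFILE))
    {
        auto pProfile = reinterpret_cast<PKSCAMERA_EXTENDEDPROP_PROFILE>(
            static_cast<BYTE*>(pData) + sizeof(KSCAMERA_EXTENDEDPROP_HEADER));

        // Remember that a Hello (FaceAuth) session is starting on this camera.
        m_helloSessionActive =
            !!IsEqualGUID(pProfile->ProfileId, KSCAMERAPROFILE_FaceAuth_Mode);
    }

    // Always forward the call to the device's IKsControl.
    return m_spIkscontrol->KsProperty(pProperty, ulPropertyLength,
                                      pData, ulDataLength, pBytesReturned);
}
```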

Virtual Camera

Can virtual cameras be chained so that effects can be aggregated from two or more implementers?

For chaining effects, DMFT is the supported approach. Chaining virtual cameras is not supported. However, if one effect implementation is a traditional custom media source, then a virtual camera (having its own effects) could instantiate the custom media source, thus chaining the two.

What is the difference between CustomMediaSource and VirtualCamera, and which one should be used?

Virtual camera is a refresh of CustomMediaSource that makes it easier for ISVs to develop cameras on Windows without needing to ship a driver package. For any new development targeting Windows 11 and later (build 22000 on), the recommended solution is Virtual Camera.
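
A minimal registration sketch (the friendly name is a placeholder, and the custom media source's CLSID must already be registered for COM activation):

```cpp
#include <windows.h>
#include <mfvirtualcamera.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

HRESULT StartVirtualCamera(REFCLSID clsidMediaSource,
                           ComPtr<IMFVirtualCamera>& spCamera)
{
    // MFCreateVirtualCamera takes the media source CLSID as a string.
    wchar_t clsidString[39] = {};
    if (StringFromGUID2(clsidMediaSource, clsidString,
                        ARRAYSIZE(clsidString)) == 0)
    {
        return E_UNEXPECTED;
    }

    HRESULT hr = MFCreateVirtualCamera(
        MFVirtualCameraType_SoftwareCameraSource,
        MFVirtualCameraLifetime_Session,    // unregistered when the app exits
        MFVirtualCameraAccess_CurrentUser,
        L"Contoso Virtual Camera",          // placeholder friendly name
        clsidString,                        // CLSID of the custom media source
        nullptr, 0,                         // default categories
        &spCamera);
    if (FAILED(hr)) return hr;

    // Start makes the virtual camera visible to camera apps on the system.
    return spCamera->Start(nullptr);
}
```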

Miscellaneous topics