Properties may be set not only on the command line, but also from other sources, such as a gradle.properties file in the user home directory or environment variables.
Co-authored-by: Jason Sandlin <jasonsa@microsoft.com>
* Fixed hang in XblCleanupAsync
CurlProvider::PerformAsync can be called simultaneously from multiple threads, which leaves CurlProvider::m_curlMultis in a bad state. As a result, m_curlMultis.size() is incorrect, m_cleanupTasksRemaining is incorrect, and XAsyncComplete is never called during CurlProvider cleanup.
I protect against this by guarding access to m_curlMultis with m_mutex.
* Revert "Fixed hang in XblCleanupAsync"
This reverts commit 1adc155e4d.
* Fixed hang in XblCleanupAsync
* Update CurlProvider.cpp
Updated the previous change based on review comments: changed tabs to spaces and made a local copy of CurlProvider::m_curlMultis in CurlProvider::CleanupAsyncProvider (see the sketch below).
Co-authored-by: Jason Sandlin <jasonsa@microsoft.com>
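A minimal sketch of the combined fix, with assumed shapes (the real CurlProvider holds more state, and the map key is simplified here to an integer id): concurrent access to m_curlMultis is serialized with m_mutex, and cleanup moves the map into a local variable under the lock so it can be iterated without racing concurrent PerformAsync calls.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

struct CurlMulti { /* wraps a curl_multi handle */ };

class CurlProvider
{
public:
    // May be called concurrently from multiple threads; m_mutex keeps
    // m_curlMultis (and therefore its size) consistent.
    void PerformAsync(uint64_t queueId)
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        auto it = m_curlMultis.find(queueId);
        if (it == m_curlMultis.end())
        {
            it = m_curlMultis.emplace(queueId, std::make_unique<CurlMulti>()).first;
        }
        // ... hand the request to it->second ...
    }

    void CleanupAsyncProvider()
    {
        // Move the map into a local under the lock, then iterate the local
        // copy so cleanup can't race concurrent PerformAsync calls.
        std::map<uint64_t, std::unique_ptr<CurlMulti>> localMultis;
        {
            std::lock_guard<std::mutex> lock{ m_mutex };
            localMultis = std::move(m_curlMultis);
        }
        for (auto& entry : localMultis)
        {
            // ... kick off CurlMulti::CleanupAsync for entry.second ...
        }
    }

private:
    std::mutex m_mutex;
    std::map<uint64_t, std::unique_ptr<CurlMulti>> m_curlMultis;
};
```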
* WinHttp request errors are now correctly handled when a disconnect isn't acknowledged by the server
* Fixes race condition between WebSocket disconnect event handler invocation and unregistration when closing a WebSocket handle
* HCWebSocketConnectAsync will now pass back the client's WebSocket handle in the XAsync result rather than an internal handle
Short-lived libHC instances were deadlocking because of insufficient checks upon entering the condvar wait.
If a large number of threads is created, threads could start executing with `m_terminate` already set to true.
Waiting on the `m_wake` condvar without first checking `m_terminate` causes a deadlock.
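A minimal sketch of the fix, reusing the member names from the text (`m_terminate`, `m_wake`) plus an illustrative `m_workPending` flag: the predicate overload of `condition_variable::wait` evaluates the predicate before blocking, so a thread that starts running after termination has already been requested never waits.

```cpp
#include <condition_variable>
#include <mutex>

class ThreadPool
{
public:
    void WorkerLoop()
    {
        std::unique_lock<std::mutex> lock{ m_mutex };
        for (;;)
        {
            // The predicate is checked before blocking, so a thread that
            // begins execution after m_terminate was set never waits.
            m_wake.wait(lock, [this] { return m_terminate || m_workPending; });
            if (m_terminate)
            {
                return;
            }
            m_workPending = false;
            // ... run queued work (dropping the lock while doing so) ...
        }
    }

    void Terminate()
    {
        {
            std::lock_guard<std::mutex> lock{ m_mutex };
            m_terminate = true;
        }
        m_wake.notify_all();
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_wake;
    bool m_terminate{ false };
    bool m_workPending{ false };
};
```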
* Log a bunch of network diagnostic info on Android for network failure cases
* Add a not-suspended check if available
* Add a null check
* Move network details gathering to a NetworkObserver class
* Some tweaks
* Some cosmetic tweaks
* Use StringBuilder.length instead of a separate boolean
* Reintroducing WebSocket close behavior for the Request_Error scenario
This syncs several fixes from the Windows OS repo:
* Some edge case race conditions in queue signaling.
* A use-after-free bug.
* Hardening of the task queue handles so that closing a handle twice will fail instead of corrupting the queue (see the sketch below).
* A suspend/resume API that is only enabled and used on Xbox.
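A generic sketch of the double-close hardening idea, not the actual XTaskQueue implementation (real handle tables usually pair this with reference counting): each handle carries a signature that is validated on every use and cleared on close, so a second close fails instead of corrupting queue state.

```cpp
#include <cstdint>

constexpr uint32_t QUEUE_SIGNATURE = 0x51554555; // arbitrary marker value

struct QueueHandle
{
    uint32_t signature{ QUEUE_SIGNATURE };
    // ... queue state ...
};

// Fails (instead of corrupting state) when the handle was already
// closed or was never valid.
bool CloseQueueHandle(QueueHandle* handle)
{
    if (handle == nullptr || handle->signature != QUEUE_SIGNATURE)
    {
        return false; // double close or bogus handle
    }
    handle->signature = 0; // invalidate before teardown
    // ... release queue resources and free the handle ...
    return true;
}
```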
When WebSocket message fragments are received, they are collected in a buffer within LHC and forwarded to the client only when that buffer is full or when the final fragment has been received. When WinHttp hands us fragments, LHC incorrectly invokes the HCWebSocketBinaryMessageFragmentFunction even when we were able to accumulate the fragments into a complete message in our buffer before passing it along to the client.
This also fixes a bug where the WebSocket's default m_maxReceiveBufferSize wasn't correctly initialized to 20KB as documented in the header.
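A simplified sketch of the intended buffering behavior (everything except m_maxReceiveBufferSize is an illustrative name): fragments accumulate in the internal buffer, and the fragment path is used only when the buffer genuinely overflows, not merely because the transport delivered the message in pieces.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

class WebSocketReceiveBuffer
{
public:
    // Called for each fragment the transport (e.g. WinHttp) delivers.
    void OnFragmentReceived(const uint8_t* data, size_t size, bool isFinalFragment)
    {
        m_buffer.insert(m_buffer.end(), data, data + size);

        if (isFinalFragment)
        {
            // Whole message accumulated: deliver via the normal message
            // callback even though the transport fragmented it.
            DeliverMessage(m_buffer.data(), m_buffer.size());
            m_buffer.clear();
        }
        else if (m_buffer.size() >= m_maxReceiveBufferSize)
        {
            // Buffer genuinely full: only now deliver a partial message
            // through the fragment callback.
            DeliverMessageFragment(m_buffer.data(), m_buffer.size());
            m_buffer.clear();
        }
    }

private:
    void DeliverMessage(const uint8_t*, size_t) { /* invoke message handler */ }
    void DeliverMessageFragment(const uint8_t*, size_t) { /* invoke fragment handler */ }

    size_t m_maxReceiveBufferSize{ 20 * 1024 }; // 20KB default per the header docs
    std::vector<uint8_t> m_buffer;
};
```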
During cleanup, CurlProvider loops over its CurlMultis and kicks off CurlMulti::CleanupAsync for each of them. When the last of those sub-tasks completes, the provider is freed and then the CurlProvider::CleanupAsync task is completed. There is a race condition such that the final CurlMulti cleanup may complete _before_ CurlProvider::CleanupAsyncProvider's XAsyncOp::Begin. This race exposes a couple of nasty bugs:
* Due to a tricky detail of how C++ move semantics work, ownership of the CurlMulti isn't properly transferred to CurlMulti::CleanupAsync, leading to the CurlMulti object (and its curl_multi handle) being cleaned up twice: once as part of CurlMulti::CleanupAsync (expected), and then a second time when the final MultiCleanupComplete fires while the CurlMulti object is still in the CurlProvider's m_curlMultis map (unexpected). The fix is to properly transfer ownership of the CurlMulti by passing it by value rather than by r-value reference (see the sketch after this list).
* Because CurlProvider is looping over its member m_curlMultis, we need to ensure that the provider isn't destroyed before that loop completes. The fix for this is to gate cleanup of the CurlProvider on not only the CurlMulti cleanup operations completing, but also on CurlProvider::CleanupAsyncProvider having exited that cleanup loop.
I also fixed a couple of other spots where ownership of unique_ptrs wasn't properly transferred, even though they didn't directly lead to bugs.
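A minimal, self-contained sketch of the ownership pitfall with a stand-in CurlMulti type: an r-value reference parameter only *offers* a transfer, so if the callee never moves from it, the caller still owns the object when the call returns and both sides destroy it. A by-value parameter makes the transfer unconditional at the call site.

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct CurlMulti { /* wraps a curl_multi handle */ };

// An r-value reference parameter does not itself take ownership. If the
// function exits before std::move(multi) is consumed, the caller's
// unique_ptr still owns the object, and it gets destroyed a second time
// on the caller's side.
void CleanupAsyncByRef(std::unique_ptr<CurlMulti>&& multi)
{
    bool failedEarly = true; // stand-in for an early-out path
    if (failedEarly)
    {
        return; // never moved from 'multi' => caller still owns it
    }
    std::unique_ptr<CurlMulti> owned = std::move(multi);
}

// A by-value parameter forces the transfer at the call site; this
// function owns the CurlMulti on every code path, so it is destroyed
// exactly once no matter what happens inside.
void CleanupAsyncByValue(std::unique_ptr<CurlMulti> multi)
{
}

int main()
{
    auto multi = std::make_unique<CurlMulti>();

    CleanupAsyncByRef(std::move(multi));
    assert(multi != nullptr); // ownership was NOT transferred

    CleanupAsyncByValue(std::move(multi));
    assert(multi == nullptr); // ownership WAS transferred
}
```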
* WebSocket class reworked, allowing multiple "observers" to independently register for events. This allows both clients and HC_PERFORM_ENV to keep track of the connection state (see the sketch after this list)
* HCWebSocketConnectAsync shim added to HC_PERFORM_ENV, tracking WebSocket connect attempts
* During cleanup, WebSockets in the process of connecting will be allowed to finish connecting (currently no support for cancellation)
* Connected WebSockets will be properly closed
* Validated E2E using local WebSocket echo server + API runner scripts (added in separate PR)
* Regression tested via API runner
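A condensed sketch of the observer arrangement (the interface and member names are illustrative, not LHC's actual types): each interested party registers an observer, and the WebSocket notifies a snapshot of the list taken under a lock, which also sidesteps the invoke-versus-unregister race mentioned earlier.

```cpp
#include <algorithm>
#include <memory>
#include <mutex>
#include <vector>

struct IWebSocketObserver
{
    virtual ~IWebSocketObserver() = default;
    virtual void OnConnected() = 0;
    virtual void OnDisconnected(int closeStatus) = 0;
};

class WebSocket
{
public:
    void RegisterObserver(std::shared_ptr<IWebSocketObserver> observer)
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        m_observers.push_back(std::move(observer));
    }

    void UnregisterObserver(const std::shared_ptr<IWebSocketObserver>& observer)
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        m_observers.erase(
            std::remove(m_observers.begin(), m_observers.end(), observer),
            m_observers.end());
    }

    // Called by the platform layer when the connection drops. A snapshot
    // is taken under the lock so notification doesn't race unregistration.
    void NotifyDisconnect(int closeStatus)
    {
        std::vector<std::shared_ptr<IWebSocketObserver>> snapshot;
        {
            std::lock_guard<std::mutex> lock{ m_mutex };
            snapshot = m_observers;
        }
        for (auto& observer : snapshot)
        {
            observer->OnDisconnected(closeStatus);
        }
    }

private:
    std::mutex m_mutex;
    std::vector<std::shared_ptr<IWebSocketObserver>> m_observers;
};
```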
* Adds HCHttpPerformAsync shim to HC_PERFORM_ENV to track ongoing HTTP requests
* Adds partial support for HTTP cancellation (see the sketch after this list)
* Adds logic to HCCleanupAsync to cancel and await ongoing HTTP requests during cleanup
* Added relevant unit tests
Remaining work:
* Tracking active WebSocket connections & terminating during HCCleanup
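A rough sketch of the tracking-and-drain bookkeeping under assumed names (`ActiveRequestTracker` and its members are hypothetical; the real shim lives in HC_PERFORM_ENV and completes an XAsync operation rather than blocking a thread): requests register on start and unregister on completion, and cleanup stops accepting new work, requests cancellation, then waits for the active count to hit zero.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Hypothetical tracker; the real implementation hangs off HC_PERFORM_ENV.
class ActiveRequestTracker
{
public:
    // Returns false once cleanup has started, so new requests are rejected.
    bool OnRequestStarted()
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        if (m_cleanupStarted)
        {
            return false;
        }
        ++m_activeRequests;
        return true;
    }

    void OnRequestCompleted()
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        if (--m_activeRequests == 0)
        {
            m_drained.notify_all();
        }
    }

    // Called from cleanup: stop accepting work, then wait for in-flight
    // requests (which cleanup has asked to cancel) to finish.
    void DrainForCleanup()
    {
        std::unique_lock<std::mutex> lock{ m_mutex };
        m_cleanupStarted = true;
        m_drained.wait(lock, [this] { return m_activeRequests == 0; });
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_drained;
    size_t m_activeRequests{ 0 };
    bool m_cleanupStarted{ false };
};
```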
* Fixes a bug where messages that exceeded the hardcoded receive buffer size were never delivered to clients
* Added a public API to set a message fragment handler. When messages are broken down at the platform level and exceed the configured receive buffer size, they will be passed to clients in fragments.
* Adds a public API to configure the maximum WebSocket receive buffer size. When the receive buffer is full, partial messages are passed to the client.
* Adds a public API to configure the maximum receive buffer size (WinHttp only). If large messages are expected, this can be changed so that full messages are delivered all at once rather than in chunks. Setting this buffer size to a large value does mean that the entire message will be stored in LHC memory until it is passed along to clients, so memory usage may be affected (see the usage sketch below).
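Whatever the exact registration API looks like (the shipped header is authoritative; the handler shape below is an assumption modeled on HCWebSocketBinaryMessageFragmentFunction), a client-side fragment handler typically reassembles chunks like this:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumed handler shape: LHC delivers each chunk with a flag marking the
// final one. The exact parameters of the real fragment callback may
// differ; consult the header.
class FragmentReassembler
{
public:
    void OnBinaryFragment(const uint8_t* payload, uint32_t payloadSize, bool isFinalFragment)
    {
        m_pending.insert(m_pending.end(), payload, payload + payloadSize);
        if (isFinalFragment)
        {
            HandleCompleteMessage(m_pending.data(), m_pending.size());
            m_pending.clear();
        }
    }

private:
    void HandleCompleteMessage(const uint8_t*, size_t) { /* app logic */ }
    std::vector<uint8_t> m_pending;
};
```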
* Fixes a deadlock that occurs if callers use a single-threaded, manually dispatched queue for both HTTP calls and HCCleanupAsync (see the sketch after this list)
* Layered such that other HTTP providers can support async cleanup in the future
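The hazard, sketched against the public XAsync/XTaskQueue surface (simplified, error handling omitted; double-check names and signatures against the headers): on a manually dispatched queue, blocking on cleanup completion without pumping the queue means the work cleanup is waiting on can never run, so the caller should keep dispatching while polling.

```cpp
#include <XAsync.h>
#include <XTaskQueue.h>
#include <httpClient/httpClient.h>

void PumpedCleanup(XTaskQueueHandle manualQueue)
{
    XAsyncBlock asyncBlock{};
    asyncBlock.queue = manualQueue; // same manual queue used for HTTP calls

    HCCleanupAsync(&asyncBlock);

    // Blocking with XAsyncGetStatus(&asyncBlock, true) here can deadlock:
    // cleanup's own work and the HTTP completions it awaits are queued on
    // manualQueue, which nothing else is dispatching. Pump while polling.
    while (XAsyncGetStatus(&asyncBlock, /* wait */ false) == E_PENDING)
    {
        XTaskQueueDispatch(manualQueue, XTaskQueuePort::Work, 0);
        XTaskQueueDispatch(manualQueue, XTaskQueuePort::Completion, 0);
    }
}
```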