* Harden logic in CurlMulti::CleanupAsync to ensure that the CurlMulti::Perform XTaskQueue callback is never executed after the CurlMulti is destroyed. Based on inspection, I've made two separate fixes that address two possible ways this could happen:
1) Add synchronization between CurlMulti::CleanupAsync and CurlMulti::AddRequest. Right now, thread A could call CleanupAsync, grab m_mutex, see that m_easyRequests is empty, and thus proceed with cleanup immediately. At the same time, thread B could call AddRequest, which adds to m_easyRequests and schedules an asynchronous Perform callback. That Perform callback could then execute after the CurlMulti is destroyed and cause a crash.
2) The only other place CurlMulti::Perform is scheduled is in Perform itself - if the call to curl_multi_perform indicates there are still running handles, it will reschedule another Perform callback. Before doing so, however, it processes Curl messages with curl_multi_info_read and removes from m_easyHandles any requests that have completed. I haven't confirmed this is possible, but if curl_multi_perform indicated that there were running handles while m_easyRequests was empty, CurlMulti::Cleanup could destroy the CurlMulti while there is still a pending Perform callback. To make this slightly more robust, I've changed CurlMulti::Cleanup to explicitly track and await pending Perform callbacks.
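The two fixes above can be sketched as follows. This is an illustrative model, not the actual libHttpClient code: the class, member names, and the use of plain `std::thread` in place of XTaskQueue scheduling are all assumptions made for the example.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Sketch of the hardened CurlMulti teardown: AddRequest and Cleanup share
// m_mutex (fix 1), and Cleanup explicitly awaits pending Perform callbacks
// before tearing down state (fix 2).
class CurlMultiSketch
{
public:
    // Returns false if cleanup has already begun and no work can be scheduled.
    bool AddRequest()
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        if (m_cleanupRequested)
        {
            return false; // fix 1: AddRequest can no longer race past Cleanup
        }
        ++m_pendingPerformCallbacks;
        m_workers.emplace_back([this] { Perform(); }); // stand-in for XTaskQueue scheduling
        return true;
    }

    void Cleanup()
    {
        std::unique_lock<std::mutex> lock{ m_mutex };
        m_cleanupRequested = true;
        // Fix 2: await pending Perform callbacks before destroying anything
        m_drained.wait(lock, [this] { return m_pendingPerformCallbacks == 0; });
        lock.unlock();
        for (auto& t : m_workers) { if (t.joinable()) { t.join(); } }
        m_workers.clear();
    }

    int completedPerforms{ 0 }; // observable for the example only

private:
    void Perform()
    {
        std::lock_guard<std::mutex> lock{ m_mutex };
        ++completedPerforms; // the real code would call curl_multi_perform here
        --m_pendingPerformCallbacks;
        m_drained.notify_all();
    }

    std::mutex m_mutex;
    std::condition_variable m_drained;
    std::vector<std::thread> m_workers;
    int m_pendingPerformCallbacks{ 0 };
    bool m_cleanupRequested{ false };
};
```

After `Cleanup` returns, every scheduled `Perform` has finished, so destroying the object can no longer race with a pending callback.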
waittimer uses a global timerqueue object, but timerqueue's dtor destroys a thread, which owns allocated resources. This can conflict with other static / global destructors in the process. We should not be using "magic statics" to do complex work like freeing dynamic memory.
This changes the global timerqueue to a shared_ptr. The waittimer destructor checks the shared pointer's use count and, if this is the last wait timer, frees the global shared pointer. This causes the thread deallocation to happen when the last task queue is closed instead of post-main during CRT cleanup.
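The ownership scheme above can be sketched as follows. The names (`TimerQueue`, `WaitTimer`, `g_timerQueue`) are illustrative, and the real queue's worker thread is elided; only the shared-ownership lifetime pattern is shown.

```cpp
#include <memory>
#include <mutex>

// Sketch: each WaitTimer shares ownership of the queue via shared_ptr, so the
// queue (and, in the real code, its worker thread) is destroyed when the last
// timer goes away, not during post-main CRT cleanup of a magic static.
struct TimerQueue
{
    static inline int liveInstances = 0; // observable for the example only
    TimerQueue() { ++liveInstances; }
    ~TimerQueue() { --liveInstances; }   // the real dtor would join/destroy a thread
};

std::shared_ptr<TimerQueue> g_timerQueue;
std::mutex g_timerQueueMutex;

class WaitTimer
{
public:
    WaitTimer()
    {
        std::lock_guard<std::mutex> lock{ g_timerQueueMutex };
        if (!g_timerQueue)
        {
            g_timerQueue = std::make_shared<TimerQueue>();
        }
        m_queue = g_timerQueue;
    }

    ~WaitTimer()
    {
        std::lock_guard<std::mutex> lock{ g_timerQueueMutex };
        m_queue.reset();
        // If only the global reference remains, this was the last WaitTimer:
        // release the global so the queue is destroyed now, not at CRT shutdown.
        if (g_timerQueue.use_count() == 1)
        {
            g_timerQueue.reset();
        }
    }

private:
    std::shared_ptr<TimerQueue> m_queue;
};
```

The key property is that destruction happens on a well-defined application thread while the process is still fully alive, avoiding ordering hazards among static destructors.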
Tested with a repro that exposed the magic static fragility (crash in custom global new operator). Verified all works and no leaks post change (thanks Sebastian Perez-Delgado).
Adding logic for response decompression: check whether a received response has been encoded using gzip by examining the Content-Encoding header (this applies only if response compression is set on the HCCallHandle prior to sending the request and gzip compression is enabled). Response decompression has been tested with the Win32-Http sample.
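The gating check described above can be sketched as a small helper. The function name and exact matching rules are assumptions for illustration, not the actual LHC code; per RFC 9110, Content-Encoding values are compared case-insensitively.

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Sketch: decompression only kicks in when the caller opted in on the
// HCCallHandle before the request was sent AND the response's
// Content-Encoding header indicates gzip.
bool ShouldGzipDecompress(const std::string& contentEncodingHeader, bool compressionEnabledOnCall)
{
    if (!compressionEnabledOnCall)
    {
        return false; // the HCCallHandle must opt in before the request is sent
    }
    std::string value = contentEncodingHeader;
    std::transform(value.begin(), value.end(), value.begin(),
        [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return value == "gzip";
}
```

When this returns true, the body would then be fed through a zlib inflate pass before being surfaced to the client.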
* Adding ZLIB Source
* Building zlib inside libHttpClient main module
* Removing std=c++17 flag for C files
* Adding std=c++17 flag to Linux CMakelists.txt
---------
Co-authored-by: Raul Gomez Rodriguez <raulalbertog@microsoft.com>
The task queue has a narrow race: if a future callback is evaluated just as the task queue is terminated, it may be skipped during termination processing. The result is that the callback won't be canceled immediately when the task queue terminates; instead it will be canceled when its due time arrives.
There is a second possible race as well. If a new future callback is scheduled and the scheduling code is interleaved with a terminate call on another thread, the same thing can occur.
This change fixes both cases. It also fixes up some incorrect macros in the test projects so they can build again.
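Both races can be closed by making schedule and terminate serialize on the same lock, as in this simplified model. The class and member names are hypothetical; the real task queue is considerably more involved.

```cpp
#include <functional>
#include <mutex>
#include <vector>

// Sketch: scheduling a future callback and terminating the queue take the
// same lock, so a callback can never slip past termination processing, and a
// schedule interleaved with terminate is canceled immediately rather than
// waiting for its due time.
class FutureQueueSketch
{
public:
    // Returns true if scheduled, false if the queue is already terminated
    // (second race: schedule interleaved with terminate).
    bool ScheduleFuture(std::function<void()> callback)
    {
        std::lock_guard<std::mutex> lock{ m_lock };
        if (m_terminated)
        {
            return false;
        }
        m_pendingFutures.push_back(std::move(callback));
        return true;
    }

    // Cancels every pending future under the lock so none is skipped
    // (first race: a future evaluated during termination). Returns the
    // number of futures canceled.
    size_t Terminate()
    {
        std::vector<std::function<void()>> canceled;
        {
            std::lock_guard<std::mutex> lock{ m_lock };
            m_terminated = true;
            canceled.swap(m_pendingFutures);
        }
        return canceled.size();
    }

private:
    std::mutex m_lock;
    bool m_terminated{ false };
    std::vector<std::function<void()>> m_pendingFutures;
};
```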
When projects were refactored to support building shared library versions of LHC, the import of libHttpClient.props was removed from many of the libHttpClient projects because it incorrectly pulls in project references to the static lib projects and causes linker errors. That props file historically served a dual purpose: it's used by clients of LHC to add references to libHttpClient projects, but it was also imported by the libHttpClient projects themselves, defining several customizable properties used to build the libHttpClient lib (see readme for details). While most of libHttpClient.props is no longer needed by the libHttpClient lib projects, we should still be pulling in the client's hc_settings.props customizations.
This PR does two things: 1) update libHttpClient.Common.vcxitems to pull in hc_settings.props and add any build customizations it specifies, and 2) remove libHttpClient.props from the remaining libHttpClient lib projects since several were missed previously.
* Use fewer `#if` blocks to handle `HC_NOZLIB`; also fix `HC_NOWEBSOCKET`
* Cleanup handling of include overrides (win32)
* Remove HC_PLATFORM_GRTS and fix the Android build
* CR feedback
* Change LHC to be a shared library on Linux, exposing only methods specified in libHttpClientExports.txt
* Updates lib name and output paths to be consistent with other platforms
* Fix some stray allocations that weren't properly using the custom allocators
* Fix .sln build configuration so that GDK projects aren't set to build when Platform=x86/x64 and vice versa
The WebSocket subprotocol parameter is ignored in the WinHttp WebSocket stack. If specified, it should set the "Sec-WebSocket-Protocol" header during the connect handshake.
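The intended behavior can be sketched as a helper that builds the header line; in the real fix the value would be handed to WinHttpAddRequestHeaders before the upgrade. The function name is hypothetical.

```cpp
#include <string>

// Sketch: when a subprotocol is supplied, produce the Sec-WebSocket-Protocol
// header the WinHttp provider should add to the connect request. Returns an
// empty string when no subprotocol is set, matching the current (header-less)
// behavior.
std::string BuildSubprotocolHeader(const std::string& subprotocol)
{
    if (subprotocol.empty())
    {
        return {};
    }
    return "Sec-WebSocket-Protocol: " + subprotocol + "\r\n";
}
```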
* Add project files for libHttpClient.GDK.dll and libHttpClient.Win32.dll
* Add props file libHttpClient.import.props which adds references to the appropriate LHC projects. By default, it will add a reference to the dll project, but this can be overridden by specifying the build property HCStaticLib=true prior to importing the props file. The existing libHttpClient.props file will continue to add references to the old static lib projects so as not to break existing clients.
Upon receiving a suspend event, the WinHttp provider will automatically tear down any active WebSocket connections. Depending on the current state of the connection, the suspend handler will either call WinHttpWebSocketClose followed by WinHttpCloseHandle, or just call WinHttpCloseHandle directly. If we receive a WinHttp disconnect callback at the same time, the disconnect handler will call WinHttpCloseHandle. This leads to a race condition between the two handlers:
Suppose the suspend handler runs first and determines that it needs to call WinHttpWebSocketClose (because at that point we believe the WebSocket is still connected), but the disconnect handler then runs and calls WinHttpCloseHandle BEFORE the suspend handler actually calls WinHttpWebSocketClose. The WinHttp handle may then be in an invalid state when WinHttpWebSocketClose finally gets called, leading to a crash.
The fix here involves a couple things. First, I updated the suspend handler to forego the call to WinHttpWebSocketClose altogether. This means that during suspend the WebSocket won't be torn down gracefully, but it greatly simplifies the LHC teardown process. Second, there is some additional logic needed to make sure that LHC still raises a disconnected event to clients during suspend, as the flow will change slightly with the removal of the WinHttpWebSocketClose call.
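The resulting teardown logic can be modeled as below: the handle is closed exactly once, whichever handler gets there first, and the suspend path no longer attempts a graceful WinHttpWebSocketClose but still raises a disconnected event. The class and member names are assumptions for illustration, not the actual LHC code.

```cpp
#include <atomic>

// Sketch of the fixed teardown: WinHttpCloseHandle is invoked exactly once
// regardless of whether the suspend handler or the WinHttp disconnect
// callback wins the race, and clients always see a disconnected event.
class WebSocketTeardownSketch
{
public:
    // Returns true for the single caller that actually closes the handle.
    bool TryCloseHandle()
    {
        return !m_handleClosed.exchange(true);
    }

    void OnSuspend()
    {
        // Fix part 1: skip WinHttpWebSocketClose entirely; just close the handle.
        if (TryCloseHandle())
        {
            // Fix part 2: still surface a disconnected event to clients,
            // since the graceful-close path that used to raise it is gone.
            m_disconnectRaised = true;
        }
    }

    void OnWinHttpDisconnect()
    {
        if (TryCloseHandle())
        {
            m_disconnectRaised = true;
        }
    }

    bool DisconnectRaised() const { return m_disconnectRaised; }

private:
    std::atomic<bool> m_handleClosed{ false };
    bool m_disconnectRaised{ false };
};
```

Whichever handler loses the race becomes a no-op, so WinHttpCloseHandle can never be followed by a call on an already-invalid handle.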
This change also fixes a debug assert caused by the buffer size used in FormatTrace.