                                 _   _ ____  _
                             ___| | | |  _ \| |
                            / __| | | | |_) | |
                           | (__| |_| |  _ <| |___
                            \___|\___/|_| \_\_____|
Things that could be nice to do in the future

Things to do in project curl. Please tell us what you think, contribute and
send us patches that improve things!

Be aware that these are things that we could do, or have once been considered
things we could do. If you want to work on any of these areas, please
consider bringing it up for discussion first on the mailing list so that we
all agree it is still a good idea for the project!

All bugs documented in the KNOWN_BUGS document are subject for fixing!
1. libcurl
1.1 TFO support on Windows
1.2 Consult %APPDATA% also for .netrc
1.3 struct lifreq
1.4 alt-svc sharing
1.5 get rid of PATH_MAX
1.6 native IDN support on macOS
1.7 Support HTTP/2 for HTTP(S) proxies
1.8 CURLOPT_RESOLVE for any port number
1.9 Cache negative name resolves
1.10 auto-detect proxy
1.11 minimize dependencies with dynamically loaded modules
1.12 updated DNS server while running
1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION
1.14 Typesafe curl_easy_setopt()
1.15 Monitor connections in the connection pool
1.16 Try to URL encode given URL
1.17 Add support for IRIs
1.18 try next proxy if one doesn't work
1.19 provide timing info for each redirect
1.20 SRV and URI DNS records
1.21 netrc caching and sharing
1.22 CURLINFO_PAUSE_STATE
1.23 Offer API to flush the connection pool
1.24 TCP Fast Open for Windows
1.25 Expose tried IP addresses that failed
1.27 hardcode the "localhost" addresses
1.28 FD_CLOEXEC
1.29 Upgrade to websockets
1.30 config file parsing

2. libcurl - multi interface
2.1 More non-blocking
2.2 Better support for same name resolves
2.3 Non-blocking curl_multi_remove_handle()
2.4 Split connect and authentication process
2.5 Edge-triggered sockets should work
2.6 multi upkeep
2.7 Virtual external sockets
2.8 dynamically decide to use socketpair

3. Documentation
3.2 Provide cmake config-file

4. FTP
4.1 HOST
4.2 Alter passive/active on failure and retry
4.3 Earlier bad letter detection
4.5 ASCII support
4.6 GSSAPI via Windows SSPI
4.7 STAT for LIST without data connection
4.8 Option to ignore private IP addresses in PASV response

5. HTTP
5.1 Better persistency for HTTP 1.0
5.2 Set custom client ip when using haproxy protocol
5.3 Rearrange request header order
5.4 Allow SAN names in HTTP/2 server push
5.5 auth= in URLs
5.6 alt-svc should fall back if alt-svc doesn't work

6. TELNET
6.1 ditch stdin
6.2 ditch telnet-specific select
6.3 feature negotiation debug data

7. SMTP
7.2 Enhanced capability support
7.3 Add CURLOPT_MAIL_CLIENT option

8. POP3
8.2 Enhanced capability support

9. IMAP
9.1 Enhanced capability support

10. LDAP
10.1 SASL based authentication mechanisms
10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS
10.3 Paged searches on LDAP server

11. SMB
11.1 File listing support
11.2 Honor file timestamps
11.3 Use NTLMv2
11.4 Create remote directories

12. FILE
12.1 Directory listing for FILE:

13. SSL
13.1 TLS-PSK with OpenSSL
13.2 Provide mutex locking API
13.4 Cache/share OpenSSL contexts
13.5 Export session ids
13.6 Provide callback for cert verification
13.8 Support DANE
13.9 TLS record padding
13.10 Support Authority Information Access certificate extension (AIA)
13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
13.13 Make sure we forbid TLS 1.3 post-handshake authentication
13.14 Support the clienthello extension
13.15 Support mbedTLS 3.0

14. GnuTLS
14.2 check connection

15. Schannel
15.1 Extend support for client certificate authentication
15.2 Extend support for the --ciphers option
15.4 Add option to allow abrupt server closure

16. SASL
16.1 Other authentication mechanisms
16.2 Add QOP support to GSSAPI authentication
16.3 Support binary messages (i.e.: non-base64)

17. SSH protocols
17.1 Multiplexing
17.2 Handle growing SFTP files
17.3 Support better than MD5 hostkey hash
17.4 Support CURLOPT_PREQUOTE
17.5 SSH over HTTPS proxy with more backends

18. Command line tool
18.1 sync
18.2 glob posts
18.3 prevent file overwriting
18.4 --proxycommand
18.5 UTF-8 filenames in Content-Disposition
18.6 Option to make -Z merge line-based outputs on stdout
18.7 at least N milliseconds between requests
18.8 Consider convenience options for JSON and XML?
18.9 Choose the name of file in braces for complex URLs
18.10 improve how curl works in a Windows console window
18.11 Windows: set attribute 'archive' for completed downloads
18.12 keep running, read instructions from pipe/socket
18.13 Ratelimit or wait between serial requests
18.14 --dry-run
18.15 --retry should resume
18.16 send only part of --data
18.17 consider file name from the redirected URL with -O ?
18.18 retry on network is unreachable
18.19 expand ~/ in config files
18.20 host name sections in config files
18.21 retry on the redirected-to URL
18.23 Set the modification date on an uploaded file
18.24 Use multiple parallel transfers for a single download
18.25 Prevent terminal injection when writing to terminal
18.26 Custom progress meter update interval

19. Build
19.1 roffit
19.2 Enable PIE and RELRO by default
19.3 Don't use GNU libtool on OpenBSD
19.4 Package curl for Windows in a signed installer

20. Test suite
20.1 SSL tunnel
20.2 nicer lacking perl message
20.3 more protocols supported
20.4 more platforms supported
20.5 Add support for concurrent connections
20.6 Use the RFC6265 test suite
20.7 Support LD_PRELOAD on macOS
20.8 Run web-platform-tests url tests
20.9 Bring back libssh tests on Travis

21. MQTT
21.1 Support rate-limiting

==============================================================================
1. libcurl

1.1 TFO support on Windows

TCP Fast Open is supported on several platforms but not on Windows. Work on
this was once started but never finished.

See https://github.com/curl/curl/pull/3378

1.2 Consult %APPDATA% also for .netrc

%APPDATA%\.netrc is not considered when running on Windows. Shouldn't it be?

See https://github.com/curl/curl/issues/4016

1.3 struct lifreq

Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete,
in order to properly support IPv6 addresses for network interfaces.

1.4 alt-svc sharing

The share interface could benefit from allowing the alt-svc cache to be
shared between easy handles.

See https://github.com/curl/curl/issues/4476

1.5 get rid of PATH_MAX

Having code use and rely on PATH_MAX is not nice:
https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

Currently the libssh2 SSH based code uses it, but to remove PATH_MAX from
there we need libssh2 to properly tell us when we pass in a too small buffer
and its current API (as of libssh2 1.2.7) doesn't.
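The usual alternative to PATH_MAX is a retry loop with a growing heap buffer. A minimal sketch of the pattern (a hypothetical helper, not curl code) using getcwd(), which sets errno to ERANGE when the buffer is too small:

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Return the current directory in a heap buffer that grows on demand,
   instead of assuming any fixed PATH_MAX. Caller frees the result. */
char *grow_getcwd(void)
{
  size_t len = 128;               /* deliberately small starting size */
  char *buf = NULL;
  for(;;) {
    char *tmp = realloc(buf, len);
    if(!tmp) {
      free(buf);
      return NULL;
    }
    buf = tmp;
    if(getcwd(buf, len))
      return buf;                 /* success: NUL-terminated path */
    if(errno != ERANGE) {         /* real error, not a short buffer */
      free(buf);
      return NULL;
    }
    len *= 2;                     /* too small: double and retry */
  }
}
```

The same shape works for any API that reports "buffer too small" distinctly from other errors, which is exactly what the libssh2 API above currently does not do.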
1.6 native IDN support on macOS

On recent macOS versions, the getaddrinfo() function itself has built-in IDN
support. By setting the AI_CANONNAME flag, the function will return the
encoded name in the ai_canonname struct field in the returned information.
This could be used by curl on macOS when built without a separate IDN library
and an IDN host name is used in a URL.

See initial work in https://github.com/curl/curl/pull/5371

1.7 Support HTTP/2 for HTTP(S) proxies

Support for doing HTTP/2 to HTTP and HTTPS proxies is still missing.

See https://github.com/curl/curl/issues/3570

1.8 CURLOPT_RESOLVE for any port number

This option allows applications to set a replacement IP address for a given
host + port pair. Consider adding support for providing a replacement address
for the host name on all port numbers.

See https://github.com/curl/curl/issues/1264

1.9 Cache negative name resolves

A name resolve that has failed is likely to fail again if retried within a
short period of time. Currently we only cache positive responses.
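A negative cache is mostly a name-to-expiry map with a short TTL. The sketch below is purely illustrative (the names, sizes and TTL are assumptions, not libcurl internals); the clock is passed in by the caller so the behavior is deterministic:

```c
#include <string.h>

#define NEG_TTL   60   /* seconds a failure stays cached (illustrative) */
#define NEG_SLOTS 8    /* tiny fixed-size ring for the sketch */

struct neg_entry {
  char name[256];
  long expires;        /* absolute time, caller-provided clock */
};

static struct neg_entry neg_cache[NEG_SLOTS];
static int neg_next;

/* remember that this name just failed to resolve */
void neg_add(const char *name, long now)
{
  struct neg_entry *e = &neg_cache[neg_next++ % NEG_SLOTS];
  strncpy(e->name, name, sizeof(e->name) - 1);
  e->name[sizeof(e->name) - 1] = '\0';
  e->expires = now + NEG_TTL;
}

/* return 1 if this name recently failed and resolving can be skipped */
int neg_is_cached(const char *name, long now)
{
  int i;
  for(i = 0; i < NEG_SLOTS; i++)
    if(!strcmp(neg_cache[i].name, name) && now < neg_cache[i].expires)
      return 1;
  return 0;
}

/* self-check: hit within TTL, miss after expiry, miss for other names */
int neg_demo(void)
{
  neg_add("no.such.host", 1000);
  return neg_is_cached("no.such.host", 1010)
      && !neg_is_cached("no.such.host", 2000)
      && !neg_is_cached("other.host", 1010);
}
```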
1.10 auto-detect proxy

libcurl could be made to detect the system proxy setup automatically and use
that, for example on Windows, macOS and Linux desktops.

The pull-request to use libproxy for this was deferred due to doubts on the
reliability of the dependency and how to use it:
https://github.com/curl/curl/pull/977

libdetectproxy is a (C++) library for detecting the proxy on Windows:
https://github.com/paulharris/libdetectproxy

1.11 minimize dependencies with dynamically loaded modules

We could create a system with loadable modules/plug-ins, where these modules
would be the ones that link to 3rd party libs. That would allow us to avoid
having to load ALL dependencies, since only the modules needed for the
protocols the application actually uses would have to be loaded. See
https://github.com/curl/curl/issues/349

1.12 updated DNS server while running

If /etc/resolv.conf gets updated while a program using libcurl is running, it
may cause name resolves to fail unless res_init() is called. We should
consider calling res_init() + retry once unconditionally on all name resolve
failures to mitigate against this. Firefox works like that. Note that Windows
doesn't have res_init() or an alternative.

https://github.com/curl/curl/issues/2251

1.13 c-ares and CURLOPT_OPENSOCKETFUNCTION

curl will create most sockets via the CURLOPT_OPENSOCKETFUNCTION callback and
close them with the CURLOPT_CLOSESOCKETFUNCTION callback. However, c-ares
does not use those functions and instead opens and closes the sockets
itself. This means that when curl passes the c-ares socket to the
CURLMOPT_SOCKETFUNCTION it isn't owned by the application like other sockets.

See https://github.com/curl/curl/issues/2734

1.14 Typesafe curl_easy_setopt()

One of the most common problems in libcurl-using applications is the lack of
type checks for curl_easy_setopt(), which happens because it accepts varargs
and thus can take any type.

One possible solution to this is to introduce a few typed versions of setopt
for the different kinds of data you can set:

curl_easy_set_num() - sets a long value

curl_easy_set_large() - sets a curl_off_t value

curl_easy_set_ptr() - sets a pointer

curl_easy_set_cb() - sets a callback PLUS its callback data
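The idea can be sketched without libcurl at all: thin typed front-ends forward to a single varargs function, so the compiler checks the argument type at each call site. Everything below (the toy handle, option enum and function names) is a self-contained stand-in, not real libcurl API:

```c
#include <stdarg.h>

typedef long toy_off_t;
struct toy_handle {
  long num;
  toy_off_t large;
  void *ptr;
};
enum toy_opt { TOY_NUM, TOY_LARGE, TOY_PTR };

/* varargs back-end, playing the role of today's curl_easy_setopt() */
static int set_option(struct toy_handle *h, enum toy_opt opt, ...)
{
  va_list ap;
  va_start(ap, opt);
  switch(opt) {
  case TOY_NUM:   h->num = va_arg(ap, long); break;
  case TOY_LARGE: h->large = va_arg(ap, toy_off_t); break;
  case TOY_PTR:   h->ptr = va_arg(ap, void *); break;
  }
  va_end(ap);
  return 0;
}

/* typed front-ends: a wrong argument type now fails at compile time
   instead of silently corrupting the varargs read */
int toy_set_num(struct toy_handle *h, enum toy_opt opt, long v)
{
  return set_option(h, opt, v);
}
int toy_set_ptr(struct toy_handle *h, enum toy_opt opt, void *v)
{
  return set_option(h, opt, v);
}

int toy_demo(void)
{
  struct toy_handle h = {0, 0, 0};
  toy_set_num(&h, TOY_NUM, 42L);
  return h.num == 42;
}
```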
1.15 Monitor connections in the connection pool

libcurl's connection cache or pool holds a number of open connections for the
purpose of possible subsequent connection reuse. It may contain anything from
a few up to a significant number of connections. Currently, libcurl leaves
all connections as they are, and only when a connection is iterated over for
matching or reuse purposes is it verified that it is still alive.

Those connections may get closed by the server side for idleness or they may
get a HTTP/2 ping from the peer to verify that they're still alive. By adding
monitoring of the connections while in the pool, libcurl can detect dead
connections (and close them) better and earlier, and it can handle HTTP/2
pings to keep such connections alive even when not actively doing transfers
on them.

1.16 Try to URL encode given URL

Given a URL that for example contains spaces, libcurl could have an option
that would try somewhat harder than it does now and convert spaces to %20 and
perhaps URL encode byte values over 128 etc (basically do what the redirect
following code already does).

https://github.com/curl/curl/issues/514
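The space-to-%20 part of such an option could look like the following sketch (a hypothetical helper, not curl's actual URL code; bytes over 128 could be handled the same way). It returns a heap string the caller frees:

```c
#include <stdlib.h>
#include <string.h>

/* Copy a URL while percent-encoding each space as %20. */
char *encode_spaces(const char *url)
{
  size_t i, spaces = 0, len = strlen(url);
  char *out, *p;

  for(i = 0; i < len; i++)
    if(url[i] == ' ')
      spaces++;
  out = malloc(len + 2 * spaces + 1); /* each space grows by two bytes */
  if(!out)
    return NULL;
  for(p = out, i = 0; i < len; i++) {
    if(url[i] == ' ') {
      memcpy(p, "%20", 3);            /* ' ' becomes the three bytes %20 */
      p += 3;
    }
    else
      *p++ = url[i];
  }
  *p = '\0';
  return out;
}
```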
1.17 Add support for IRIs

IRIs (RFC 3987) allow localized, non-ASCII names in the URL. To properly
support this, curl/libcurl would need to translate/encode the given input
from the input string encoding into percent encoded output "over the wire".

To make that work smoothly for curl users even on Windows, curl would
probably need to be able to convert from several input encodings.

1.18 try next proxy if one doesn't work

Allow an application to specify a list of proxies to try, and on failing to
connect to the first, go on and try the next instead until the list is
exhausted. Browsers support this feature at least when they specify proxies
using PACs.

https://github.com/curl/curl/issues/896

1.19 provide timing info for each redirect

curl and libcurl provide timing information via a set of different
time-stamps (CURLINFO_*_TIME). When curl is following redirects, those
returned time values are the accumulated sums. An improvement could be to
offer separate timings for each redirect.

https://github.com/curl/curl/issues/6743

1.20 SRV and URI DNS records

Offer support for resolving SRV and URI DNS records for libcurl to know which
server to connect to for various protocols (including HTTP!).

1.21 netrc caching and sharing

The netrc file is read and parsed each time a connection is set up, which
means that if a transfer needs multiple connections for authentication or
redirects, the file might be reread (and parsed) multiple times. This makes
it impossible to provide the file as a pipe.

1.22 CURLINFO_PAUSE_STATE

Return information about the transfer's current pause state, in both
directions. https://github.com/curl/curl/issues/2588

1.23 Offer API to flush the connection pool

Sometimes applications want to flush all the existing connections kept alive.
An API could allow a forced flush, or just a forced loop that would properly
close all connections that have been closed by the server already.

1.24 TCP Fast Open for Windows

libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
macOS. Windows supports TCP Fast Open starting with Windows 10, version 1607,
and we should add support for it.

1.25 Expose tried IP addresses that failed

When libcurl fails to connect to a host, it should be able to offer the
application the list of IP addresses that were used in the attempt.

https://github.com/curl/curl/issues/2126

1.27 hardcode the "localhost" addresses

There's a new spec getting adopted that says "localhost" should always and
unconditionally be a local address and not get resolved by a DNS server. A
fine way for curl to fix this would be to simply hard-code the response to
127.0.0.1 and/or ::1 (depending on which IP versions are requested). This is
what the browsers probably will do with this hostname.

https://bugzilla.mozilla.org/show_bug.cgi?id=1220810

https://tools.ietf.org/html/draft-ietf-dnsop-let-localhost-be-localhost-02
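The hard-coded answer amounts to a case-insensitive name check before any resolver is consulted. A sketch (the function and its NULL-means-resolve-normally contract are assumptions for illustration, not curl's resolver code):

```c
#include <string.h>
#include <strings.h>   /* strcasecmp */

/* Map "localhost" (and the dot-terminated "localhost.") straight to a
   loopback literal without asking any DNS server. ip_version selects 4 or
   6; NULL means "not localhost, resolve normally". */
const char *localhost_literal(const char *host, int ip_version)
{
  if(strcasecmp(host, "localhost") && strcasecmp(host, "localhost."))
    return NULL;
  return (ip_version == 6) ? "::1" : "127.0.0.1";
}
```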
1.28 FD_CLOEXEC

Setting FD_CLOEXEC sets the close-on-exec flag for the file descriptor, which
causes the file descriptor to be automatically (and atomically) closed when
any of the exec-family functions succeed. Should probably be set by default?

https://github.com/curl/curl/issues/2252
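Per descriptor, "set by default" would mean a fcntl() call right after creation, as in this POSIX sketch. (On Linux, SOCK_CLOEXEC / O_CLOEXEC can instead set the flag atomically at creation time, closing the race against a concurrent fork+exec.)

```c
#include <fcntl.h>
#include <unistd.h>

/* Mark a file descriptor close-on-exec. Returns 0 on success. */
int set_cloexec(int fd)
{
  int flags = fcntl(fd, F_GETFD);
  if(flags == -1)
    return -1;
  return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

/* tiny self-check: create a pipe and mark one end close-on-exec */
int cloexec_demo(void)
{
  int fds[2];
  int ok;
  if(pipe(fds))
    return 0;
  ok = !set_cloexec(fds[0]) && (fcntl(fds[0], F_GETFD) & FD_CLOEXEC);
  close(fds[0]);
  close(fds[1]);
  return ok != 0;
}
```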
1.29 Upgrade to websockets

libcurl could offer a smoother path to get to a websocket connection.
See https://github.com/curl/curl/issues/3523

Michael Kaufmann's suggestion here:
https://curl.se/video/curlup-2017/2017-03-19_05_Michael_Kaufmann_Websocket_support_for_curl.mp4

1.30 config file parsing

Consider providing an API, possibly in a separate companion library, for
parsing a config file like curl's -K/--config option to allow applications to
get the same ability to read curl options from files.

See https://github.com/curl/curl/issues/3698

2. libcurl - multi interface

2.1 More non-blocking

Make sure we don't ever loop because of non-blocking sockets returning
EWOULDBLOCK or similar. Blocking cases include:

- Name resolves on non-Windows unless c-ares or the threaded resolver is used.

- The threaded resolver may block on cleanup:
  https://github.com/curl/curl/issues/4852

- file:// transfers

- TELNET transfers

- GSSAPI authentication for FTP transfers

- The "DONE" operation (post transfer protocol-specific actions) for the
  protocols SFTP, SMTP, FTP. Fixing multi_done() for this is a worthy task.

- curl_multi_remove_handle for any of the above. See section 2.3.

2.2 Better support for same name resolves

If a name resolve has been initiated for name NN and a second easy handle
wants to resolve that name as well, make it wait for the first resolve to end
up in the cache instead of doing a second separate resolve. This is
especially needed when adding many simultaneous handles using the same host
name, when the DNS resolver can get flooded.

2.3 Non-blocking curl_multi_remove_handle()

The multi interface has a few API calls that assume a blocking behavior, like
add_handle() and remove_handle(), which limits what we can do internally. The
multi API needs to be moved even more into a single function that "drives"
everything in a non-blocking manner and signals when something is done. A
remove or add would then only ask for the action to get started and then
multi_perform() etc would still be called until the add/remove is completed.

2.4 Split connect and authentication process

The multi interface treats the authentication process as part of the connect
phase. As such, any failures during authentication won't trigger the relevant
QUIT or LOGOFF for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

The multi_socket API should work with edge-triggered socket events. One of
the internal actions that need to be improved for this to work perfectly is
the 'maxloops' handling in transfer.c:readwrite_data().

2.6 multi upkeep

In libcurl 7.62.0 we introduced curl_easy_upkeep. It unfortunately only works
on easy handles. We should introduce a version of that for the multi handle,
and also consider doing "upkeep" automatically on connections in the
connection pool when the multi handle is in use.

See https://github.com/curl/curl/issues/3199

2.7 Virtual external sockets

libcurl performs operations on the given file descriptor that presume it is
a socket, and an application cannot replace them at the moment. Allowing an
application to fully replace those would allow a larger degree of freedom and
flexibility.

See https://github.com/curl/curl/issues/5835

2.8 dynamically decide to use socketpair

For users who don't use curl_multi_wait() or don't care for
curl_multi_wakeup(), we could introduce a way to make libcurl NOT
create a socketpair in the multi handle.

See https://github.com/curl/curl/issues/4829

3. Documentation

3.2 Provide cmake config-file

A config-file package is a set of files provided by us to allow applications
to write cmake scripts to find and use libcurl more easily. See
https://github.com/curl/curl/issues/885

4. FTP

4.1 HOST

HOST is a command for a client to tell the server which host name to use,
allowing FTP servers to offer name-based virtual hosting:

https://tools.ietf.org/html/rfc7151

4.2 Alter passive/active on failure and retry

When trying to connect passively to a server which only supports active
connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
connection. There could be a way to fall back to an active connection (and
vice versa). https://curl.se/bug/feature.cgi?id=1754793

4.3 Earlier bad letter detection

Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in the
process to avoid doing a resolve and connect in vain.
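The check itself is a simple scan for the two percent-encodings, which could run before any resolve is attempted. A sketch (illustrative only; curl's real parsing lives in its URL API):

```c
#include <string.h>

/* Return 1 if an FTP URL part contains a percent-encoded CR (%0d) or
   LF (%0a), in either case, which must be rejected. */
int has_crlf_code(const char *part)
{
  const char *p;
  for(p = part; (p = strchr(p, '%')) != NULL; p++) {
    if(p[1] == '0' && (p[2] == 'd' || p[2] == 'D' ||
                       p[2] == 'a' || p[2] == 'A'))
      return 1;
  }
  return 0;
}
```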
4.5 ASCII support

FTP ASCII transfers do not follow RFC 959. They don't convert the data
accordingly.

4.6 GSSAPI via Windows SSPI

In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
support for GSSAPI authentication via Windows SSPI.

4.7 STAT for LIST without data connection

Some FTP servers allow STAT for listing directories instead of using LIST,
and the response is then sent over the control connection instead of as the
otherwise used data connection: https://www.nsftools.com/tips/RawFTP.htm#STAT

This is not detailed in any FTP specification.

4.8 Option to ignore private IP addresses in PASV response

Some servers respond with private (RFC 1918 style) IP addresses in PASV
responses, and some other FTP client implementations can ignore such
addresses. To consider for libcurl as well. See
https://github.com/curl/curl/issues/1455

5. HTTP

5.1 Better persistency for HTTP 1.0

"Better" support for persistent connections over HTTP 1.0
https://curl.se/bug/feature.cgi?id=1089001

5.2 Set custom client ip when using haproxy protocol

This would allow testing servers with different client IP addresses (without
using the X-Forwarded-For header).

https://github.com/curl/curl/issues/5125

5.3 Rearrange request header order

Server implementations often make an effort to detect browsers and to reject
clients they detect as not matching. One of the last details we cannot yet
control in libcurl's HTTP requests, which also can be exploited to detect
that libcurl is in fact used even when it tries to impersonate a browser, is
the order of the request headers. I propose that we introduce a new option in
which you give headers a value, and then when the HTTP request is built it
sorts the headers based on that number. We could then have internally created
headers use a default value so only headers that need to be moved have to be
specified.

5.4 Allow SAN names in HTTP/2 server push

curl only allows an HTTP/2 push promise if the provided :authority header
value exactly matches the host name given in the URL. It could be extended to
allow any name that would match the Subject Alternative Names in the server's
TLS certificate.

See https://github.com/curl/curl/pull/3581

5.5 auth= in URLs

Add the ability to specify the preferred authentication mechanism to use by
using ;auth=<mech> in the login part of the URL.

For example:

http://test:pass;auth=NTLM@example.com would be equivalent to specifying
--user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.

Additionally this should be implemented for proxy base URLs as well.
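Extracting the mechanism from the login part is a small parsing job. A sketch of a hypothetical helper (not existing curl code) that returns a heap copy of the mechanism name, or NULL when no ;auth= is present:

```c
#include <stdlib.h>
#include <string.h>

/* Pull the mechanism out of a login string like "test:pass;auth=NTLM".
   Caller frees the returned string. */
char *login_auth_mech(const char *login)
{
  const char *tag = strstr(login, ";auth=");
  size_t len;
  char *mech;

  if(!tag)
    return NULL;
  tag += strlen(";auth=");
  len = strcspn(tag, ";@");     /* mechanism ends at the next ; or @ */
  mech = malloc(len + 1);
  if(!mech)
    return NULL;
  memcpy(mech, tag, len);
  mech[len] = '\0';
  return mech;
}
```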
5.6 alt-svc should fall back if alt-svc doesn't work

The alt-svc: header provides a set of alternative services for curl to use
instead of the original. If the first attempted one fails, it should try the
next, and if all alternatives fail, go back to the original.

See https://github.com/curl/curl/issues/4908

6. TELNET

6.1 ditch stdin

Reading input (to send to the remote server) on stdin is a crappy solution
for library purposes. We need to invent a good way for the application to be
able to provide the data to send.

6.2 ditch telnet-specific select

Make the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface won't
work for telnet.

6.3 feature negotiation debug data

Add telnet feature negotiation data to the debug callback as header data.

7. SMTP

7.2 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the EHLO command.

7.3 Add CURLOPT_MAIL_CLIENT option

Rather than use the URL to specify the mail client string to present in the
HELO and EHLO commands, libcurl should support a new CURLOPT specifically for
specifying this data, as the URL is non-standard and to be honest a bit of a
hack ;-)

Please see the following thread for more information:
https://curl.se/mail/lib-2012-05/0178.html

8. POP3

8.2 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPA command.

9. IMAP

9.1 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPABILITY command.

10. LDAP

10.1 SASL based authentication mechanisms

Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
to an LDAP server. However, this function sends username and password details
using the simple authentication mechanism (as clear text). Instead, it should
be possible to use ldap_bind_s(), specifying the security context information
ourselves.

10.2 CURLOPT_SSL_CTX_FUNCTION for LDAPS

CURLOPT_SSL_CTX_FUNCTION works perfectly for HTTPS and email protocols, but
it has no effect for LDAPS connections.

https://github.com/curl/curl/issues/4108

10.3 Paged searches on LDAP server

https://github.com/curl/curl/issues/4452
11. SMB

11.1 File listing support

Add support for listing the contents of a SMB share. The output should
probably be the same as/similar to FTP.

11.2 Honor file timestamps

The timestamp of the transferred file should reflect that of the original
file.

11.3 Use NTLMv2

Currently the SMB authentication uses NTLMv1.

11.4 Create remote directories

Support for creating remote directories when uploading a file to a directory
that doesn't exist on the server, just like --ftp-create-dirs.

12. FILE

12.1 Directory listing for FILE:

Add support for listing the contents of a directory accessed with FILE. The
output should probably be the same as/similar to FTP.

13. SSL

13.1 TLS-PSK with OpenSSL

Transport Layer Security pre-shared key ciphersuites (TLS-PSK) is a set of
cryptographic protocols that provide secure communication based on pre-shared
keys (PSKs). These pre-shared keys are symmetric keys shared in advance among
the communicating parties.

https://github.com/curl/curl/issues/5081

13.2 Provide mutex locking API

Provide a libcurl API for setting mutex callbacks in the underlying SSL
library, so that the same application code can use mutex-locking
independently of OpenSSL or GnuTLS being used.

13.4 Cache/share OpenSSL contexts

"Look at SSL cafile - quick traces look to me like these are done on every
request as well, when they should only be necessary once per SSL context (or
once per handle)". The major improvement we can rather easily do is to make
sure we don't create and kill a new SSL "context" for every request, but
instead make one for every connection and re-use that SSL context in the same
style connections are re-used. It will make us use slightly more memory but
it will make libcurl do fewer creations and deletions of SSL contexts.

Technically, the "caching" is probably best implemented by getting added to
the share interface so that easy handles that want to and can reuse the
context specify that by sharing with the right properties set.

https://github.com/curl/curl/issues/1110

13.5 Export session ids

Add an interface to libcurl that enables "session IDs" to get
exported/imported. Cris Bailiff said: "OpenSSL has functions which can
serialise the current SSL state to a buffer of your choice, and recover/reset
the state from such a buffer at a later date - this is used by mod_ssl for
apache to implement an SSL session ID cache".

13.6 Provide callback for cert verification

OpenSSL supports a callback for customised verification of the peer
certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were!

13.8 Support DANE

DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
keys and certs over DNS using DNSSEC as an alternative to the CA model.
https://www.rfc-editor.org/rfc/rfc6698.txt

An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
(https://curl.se/mail/lib-2013-03/0075.html) but it was too simple an
approach. See Daniel's comments:
https://curl.se/mail/lib-2013-03/0103.html . libunbound may be the
correct library to base this development on.

Björn Stenberg wrote a separate initial take on DANE that was never
completed.

13.9 TLS record padding

TLS (1.3) offers optional record padding and OpenSSL provides an API for it.
It could make sense for libcurl to offer this ability to applications to make
traffic patterns harder to figure out by network traffic observers.

See https://github.com/curl/curl/issues/5398
|
|
|
|
13.10 Support Authority Information Access certificate extension (AIA)
|
|
|
|
AIA can provide various things like CRLs but more importantly information
|
|
about intermediate CA certificates that can allow validation path to be
|
|
fulfilled when the HTTPS server doesn't itself provide them.
|
|
|
|
Since AIA is about downloading certs on demand to complete a TLS handshake,
|
|
it is probably a bit tricky to get done right.
|
|
|
|
See https://github.com/curl/curl/issues/2793
|
|
|
|
13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
|
|
|
|
CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
|
|
certificates when comparing the pinned keys. Therefore it is not compatible
|
|
with "HTTP Public Key Pinning" as there also intermediate and root
|
|
certificates can be pinned. This is very useful as it prevents webadmins from
|
|
"locking themselves out of their servers".
|
|
|
|
Adding this feature would make curls pinning 100% compatible to HPKP and
|
|
allow more flexible pinning.
|
|
|
|
13.13 Make sure we forbid TLS 1.3 post-handshake authentication

RFC 8740 explains how using HTTP/2 must forbid the use of TLS 1.3
post-handshake authentication. We should make sure to live up to that.

See https://github.com/curl/curl/issues/5396

13.14 Support the ClientHello padding extension

Certain stupid networks and middle boxes have a problem with SSL handshake
packets that are within a certain size range because of how that sets some
bits that previously (in older TLS versions) were not set. The ClientHello
padding extension adds padding to avoid that size range.

https://tools.ietf.org/html/rfc7685
https://github.com/curl/curl/issues/2299

13.15 Support mbedTLS 3.0

Version 3.0 is not backwards compatible with pre-3.0 versions, and curl no
longer builds due to breaking changes in the API.

See https://github.com/curl/curl/issues/7385

14. GnuTLS

14.2 check connection

Add a way to check if the connection seems to be alive, to correspond to the
SSL_peek() way we use with OpenSSL.

15. Schannel

15.1 Extend support for client certificate authentication

The existing support for the -E/--cert and --key options could be
extended by supplying a custom certificate and key in PEM format, see:
- Getting a Certificate for Schannel
  https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.2 Extend support for the --ciphers option

The existing support for the --ciphers option could be extended
by mapping the OpenSSL/GnuTLS cipher suites to the Schannel APIs, see
- Specifying Schannel Ciphers and Cipher Strengths
  https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx

15.4 Add option to allow abrupt server closure

libcurl with Schannel will error out without a known termination point from
the server (such as the length of the transfer, or an SSL "close notify"
alert) to protect against a truncation attack. Really old servers may
neglect to send any termination point. An option could be added to ignore
such abrupt closures.

https://github.com/curl/curl/issues/4427

16. SASL

16.1 Other authentication mechanisms

Add support for other authentication mechanisms such as OLP,
GSS-SPNEGO and others.

16.2 Add QOP support to GSSAPI authentication

Currently the GSSAPI authentication only supports the default QOP of auth
(Authentication), whilst Kerberos V5 supports both auth-int (Authentication
with integrity protection) and auth-conf (Authentication with integrity and
privacy protection).

16.3 Support binary messages (i.e.: non-base64)

Mandatory to support LDAP SASL authentication.

17. SSH protocols

17.1 Multiplexing

SSH is a perfectly fine multiplexed protocol which would allow libcurl to do
multiple parallel transfers from the same host using the same connection,
much in the same spirit as HTTP/2 does. libcurl however does not take
advantage of that ability but will instead always create a new connection for
new transfers even if an existing connection already exists to the host.

To fix this, libcurl would have to detect an existing connection and "attach"
the new transfer to the existing one.

17.2 Handle growing SFTP files

The SFTP code in libcurl checks the file size *before* a transfer starts and
then proceeds to transfer exactly that amount of data. If the remote file
grows while the transfer is in progress libcurl won't notice and will not
adapt. The OpenSSH SFTP command line tool does, and libcurl could also just
attempt to download more to see if there is more to get...

https://github.com/curl/curl/issues/4344

17.3 Support better than MD5 hostkey hash

libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
server's key. MD5 is generally being deprecated so we should implement
support for stronger hashing algorithms. libssh2 itself is what provides this
underlying functionality and it supports at least SHA-1 as an alternative.
SHA-1 is also being deprecated these days so we should consider working with
libssh2 to instead offer support for SHA-256 or similar.

17.4 Support CURLOPT_PREQUOTE

The two other QUOTE options are supported for SFTP, but this one was left out
for unknown reasons!

17.5 SSH over HTTPS proxy with more backends

The SSH based protocols SFTP and SCP didn't work over HTTPS proxy at
all until PR https://github.com/curl/curl/pull/6021 brought the
functionality with the libssh2 backend. Presumably, this support
can/could be added for the other backends as well.

18. Command line tool

18.1 sync

"curl --sync http://example.com/feed[1-100].rss" or
"curl --sync http://example.net/{index,calendar,history}.html"

Downloads a range or set of URLs using the remote name, but only if the
remote file is newer than the local file. A Last-Modified HTTP date header
should also be used to set the mod date on the downloaded file.

18.2 glob posts

Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
This is easily scripted though.

18.3 prevent file overwriting

Add an option that prevents curl from overwriting existing local files. When
used, and there already is an existing file with the target file name
(either -O or -o), a number should be appended (and increased if already
existing). So that index.html becomes first index.html.1 and then
index.html.2 etc.
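
The numbering scheme described above could be sketched as a small helper.
This is hypothetical code, not anything in curl; the existence check is
passed in as a callback (with a demo predicate standing in for a real
stat()/access() check):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch, not curl code: pick the first non-clashing output
   name by appending ".1", ".2", ... while the 'exists' callback reports a
   clash. Returns 0 and fills 'out', or -1 if the buffer is too small. */
static int next_free_name(const char *wanted, char *out, size_t outlen,
                          int (*exists)(const char *name))
{
    int i;
    int n = snprintf(out, outlen, "%s", wanted);
    if (n < 0 || (size_t)n >= outlen)
        return -1;
    for (i = 1; exists(out); i++) {
        n = snprintf(out, outlen, "%s.%d", wanted, i);
        if (n < 0 || (size_t)n >= outlen)
            return -1;
    }
    return 0;
}

/* demo predicate: pretend these two files already exist on disk */
static int demo_exists(const char *name)
{
    return !strcmp(name, "index.html") || !strcmp(name, "index.html.1");
}
```

With the demo predicate, asking for "index.html" yields "index.html.2",
while a name with no clash is returned unchanged.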

18.4 --proxycommand

Allow the user to make curl run a command and use its stdio to make requests
and not do any network connection by itself. Example:

curl --proxycommand 'ssh pi@raspberrypi.local -W 10.1.1.75 80' \
  http://some/otherwise/unavailable/service.php

See https://github.com/curl/curl/issues/4941

18.5 UTF-8 filenames in Content-Disposition

RFC 6266 documents how UTF-8 names can be passed to a client in the
Content-Disposition header, and curl does not support this.

https://github.com/curl/curl/issues/1888

18.6 Option to make -Z merge line-based outputs on stdout

When a user requests multiple line-based files using -Z and sends them to
stdout, curl does not "merge" them into complete lines but may very well
send partial lines from several sources.

https://github.com/curl/curl/issues/5175

18.7 at least N milliseconds between requests

Allow curl command lines to issue a lot of requests against services that
limit users to no more than N requests/second or similar. Could be
implemented with an option asking that at least a certain time has elapsed
since the previous request before the next one will be performed. Example:

$ curl "https://example.com/api?input=[1-1000]" -d yadayada --after 500

See https://github.com/curl/curl/issues/3920

18.8 Consider convenience options for JSON and XML?

Could we add `--xml` or `--json` to add the headers needed to call REST APIs:

`--xml` adds -H 'Content-Type: application/xml' -H "Accept: application/xml" and
`--json` adds -H 'Content-Type: application/json' -H "Accept: application/json"

Setting Content-Type when doing a GET or any other method without a body
would be a bit strange I think - so maybe only add CT for requests with body?
Maybe plain `--xml` and `--json` are a bit too brief and generic. Maybe
`--http-json` etc?

See https://github.com/curl/curl/issues/5203

18.9 Choose the name of file in braces for complex URLs

When using braces to download a list of URLs and you use complicated names
in the list of alternatives, it could be handy to allow curl to use other
names when saving.

Consider a way to offer that. Possibly like
{partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
colon is the output name.

See https://github.com/curl/curl/issues/221

18.10 improve how curl works in a windows console window

If you pull the scrollbar when transferring with curl in a Windows console
window, the transfer is interrupted and can get disconnected. This can
probably be improved. See https://github.com/curl/curl/issues/322

18.11 Windows: set attribute 'archive' for completed downloads

The archive bit (FILE_ATTRIBUTE_ARCHIVE, 0x20) separates files that shall be
backed up from those that are either not ready or have not changed.

Downloads in progress are neither ready to be backed up, nor should they be
opened by a different process. Only after a download has been completed is it
sensible to include it in any integral snapshot or backup of the system.

See https://github.com/curl/curl/issues/3354

18.12 keep running, read instructions from pipe/socket

Provide an option that makes curl not exit after the last URL (or even work
without a given URL), and instead read instructions passed on a pipe or over
a socket, so that a subsequent curl invocation can talk to the still running
instance and ask for transfers to get done, and thus maintain its connection
pool, DNS cache and more.

18.13 Ratelimit or wait between serial requests

Consider a command line option that can make curl do multiple serial requests
slowly, potentially with a (random) wait between transfers. There is also a
proposed set of standard HTTP headers to let servers let the client adapt to
its rate limits:
https://www.ietf.org/id/draft-polli-ratelimit-headers-02.html

See https://github.com/curl/curl/issues/5406

18.14 --dry-run

A command line option that makes curl show exactly what it would do and send
if it would run for real.

See https://github.com/curl/curl/issues/5426

18.15 --retry should resume

When --retry is used and curl actually retries a transfer, it should use the
already transferred data and do a resumed transfer for the rest (when
possible) so that it doesn't have to transfer again the data that was
already transferred before the retry.

See https://github.com/curl/curl/issues/1084

18.16 send only part of --data

When the user only wants to send a small piece of the data provided with
--data or --data-binary, like when that data is a huge file, consider a way
to specify that curl should only send a piece of that. One suggested syntax
would be: "--data-binary @largefile.zip!1073741823-2147483647".

See https://github.com/curl/curl/issues/1200
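
Parsing the suggested "!start-end" suffix could look roughly like this.
Note that the syntax is only a proposal, so this is a hypothetical sketch,
not anything curl implements:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical parser for the *suggested* "@file!start-end" syntax.
   Extracts the inclusive byte range after the last '!' and returns 0 on
   success, -1 when there is no '!' or the range is malformed. */
static int parse_data_range(const char *spec, long long *start,
                            long long *end)
{
    const char *bang = strrchr(spec, '!');
    if (!bang)
        return -1;
    if (sscanf(bang + 1, "%lld-%lld", start, end) != 2)
        return -1;
    if (*start < 0 || *end < *start)
        return -1;
    return 0;
}
```

For "@largefile.zip!1073741823-2147483647" this yields the byte offsets
1073741823 through 2147483647, i.e. the second gigabyte of the file.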

18.17 consider file name from the redirected URL with -O ?

When a user gives a URL and uses -O, and curl follows a redirect to a new
URL, the file name is not extracted and used from the newly redirected-to URL
even if the new URL may have a much more sensible file name.

This is clearly documented and helps security, since there is no surprise for
users as to which file name might get overwritten. But maybe a new option
could allow for this, or maybe -J should imply such a treatment as well, as
-J already allows the server to decide what file name to use so it already
provides the "may overwrite any file" risk.

This is extra tricky if the original URL has no file name part at all, since
then the current code path will error out with an error message, and we
cannot *know* already at that point whether curl will be redirected to a URL
that has a file name...

See https://github.com/curl/curl/issues/1241

18.18 retry on network is unreachable

The --retry option retries transfers on "transient failures". We later added
--retry-connrefused to also retry for "connection refused" errors.

Suggestions have been brought up to also allow retry on "network is
unreachable" errors and, while totally reasonable, maybe we should consider a
way to make this more configurable than to add a new option for every new
error people want to retry for?

https://github.com/curl/curl/issues/1603

18.19 expand ~/ in config files

For example .curlrc could benefit from being able to do this.

See https://github.com/curl/curl/issues/2317
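
The expansion itself is small. A hypothetical sketch (not curl code)
resolving a leading "~/" against $HOME; a Windows port would need to
consult something like %USERPROFILE% instead:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch, not curl code: expand a leading "~/" in a config
   file path using $HOME. Writes the result to 'out' and returns 0, or -1
   if HOME is unset or the output buffer is too small. Paths without a
   leading "~/" are copied through unchanged. */
static int expand_tilde(const char *path, char *out, size_t outlen)
{
    int n;
    if (!strncmp(path, "~/", 2)) {
        const char *home = getenv("HOME");
        if (!home)
            return -1;
        n = snprintf(out, outlen, "%s/%s", home, path + 2);
    }
    else
        n = snprintf(out, outlen, "%s", path);
    return (n < 0 || (size_t)n >= outlen) ? -1 : 0;
}
```

So with HOME=/home/alice, a config line naming "~/.curlrc" would resolve
to "/home/alice/.curlrc".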

18.20 host name sections in config files

Config files would be more powerful if they could set different
configurations depending on used URLs, host name or possibly origin. Then a
default .curlrc could use a specific user-agent only when doing requests
against a certain site.

18.21 retry on the redirected-to URL

When curl is told to --retry a failed transfer and follows redirects, it
might get an HTTP 429 response from the redirected-to URL and not the
original one, which then could make curl decide to rather retry the transfer
on that URL only instead of the original operation to the original URL.

Perhaps extra emphasized if the original transfer is a large POST that
redirects to a separate GET, and that GET is what gets the 429.

See https://github.com/curl/curl/issues/5462

18.23 Set the modification date on an uploaded file

For SFTP and possibly FTP, curl could offer an option to set the
modification time for the uploaded file.

See https://github.com/curl/curl/issues/5768

18.24 Use multiple parallel transfers for a single download

To enhance transfer speed, downloading a single URL can be split up into
multiple separate range downloads that get combined into a single final
result.

An ideal implementation would not use a specified number of parallel
transfers, but curl could:
- First start getting the full file as transfer A
- If after N seconds have passed and the transfer is expected to continue
  for M seconds or more, add a new transfer (B) that asks for the second
  half of A's content (and stop A at the middle).
- If splitting up the work improves the transfer rate, it could then be done
  again. Then again, etc up to a limit.

This way, if transfer B fails (because Range: isn't supported) it will let
transfer A remain the single one. N and M could be set to some sensible
defaults.

See https://github.com/curl/curl/issues/5774
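
The splitting step above is just range arithmetic. A hypothetical helper
(not libcurl code) computing the inclusive byte range that transfer B would
request and where transfer A would stop:

```c
/* Hypothetical sketch, not libcurl code: given the total file size and how
   far transfer A has come, compute the second half of the remaining
   content as an inclusive byte range for a new transfer B. Transfer A
   would then stop right before *b_start. Returns 0, or -1 when too little
   remains to be worth splitting. */
static int split_remaining(long long total, long long a_offset,
                           long long *b_start, long long *b_end)
{
    long long remaining = total - a_offset;
    if (remaining < 2)
        return -1;                       /* nothing worth splitting */
    *b_start = a_offset + remaining / 2; /* midpoint of what is left */
    *b_end = total - 1;
    return 0;
}
```

For a 100-byte file with A at offset 0, B would request bytes 50-99; the
same arithmetic can be reapplied to either half for further splits.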

18.25 Prevent terminal injection when writing to terminal

curl could offer an option to make escape sequences non-functional, or to
avoid cursor moves or similar, to reduce the risk of a user getting tricked
by clever terminal tricks.

See https://github.com/curl/curl/issues/6150
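
One simple way to make escape sequences non-functional is to replace the
control bytes that start them before the data reaches the terminal. A
hypothetical sketch, not curl code; a real implementation might instead
strip whole CSI sequences:

```c
#include <stddef.h>

/* Hypothetical sketch, not curl code: neutralize terminal escape sequences
   by replacing ESC (0x1b) and other non-printable control bytes - except
   common whitespace - with '?' in place, before the buffer is written to
   a terminal. */
static void sanitize_for_terminal(unsigned char *buf, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++) {
        unsigned char c = buf[i];
        if (c == '\n' || c == '\r' || c == '\t')
            continue;
        if (c < 0x20 || c == 0x7f)
            buf[i] = '?';
    }
}
```

A clear-screen sequence like ESC [2J then prints harmlessly as "?[2J"
instead of being interpreted by the terminal.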

18.26 Custom progress meter update interval

Users who are for example doing large downloads in CI or remote setups might
want the occasional progress meter update to see that the transfer is
progressing and hasn't gotten stuck, but they may not appreciate the
many-times-a-second frequency curl can end up doing it with now.

19. Build

19.1 roffit

Consider extending 'roffit' to produce decent ASCII output, and use that
instead of (g)nroff when building src/tool_hugehelp.c

19.2 Enable PIE and RELRO by default

Especially when having programs that execute curl via the command line, PIE
renders the exploitation of memory corruption vulnerabilities a lot more
difficult. This can be attributed to the additional information leaks being
required to conduct a successful attack. RELRO, on the other hand, marks
different binary sections like the GOT as read-only and thus kills a handful
of techniques that come in handy when attackers are able to arbitrarily
overwrite memory. A few tests showed that enabling these features had close
to no impact, neither on the performance nor on the general functionality of
curl.

19.3 Don't use GNU libtool on OpenBSD

When compiling curl on OpenBSD with "--enable-debug" it will give linking
errors when you use GNU libtool. This can be fixed by using the libtool
provided by OpenBSD itself. However for this the user always needs to invoke
make with "LIBTOOL=/usr/bin/libtool". It would be nice if the script could
have some magic to detect if this system is an OpenBSD host and then use the
OpenBSD libtool instead.

See https://github.com/curl/curl/issues/5862

19.4 Package curl for Windows in a signed installer

See https://github.com/curl/curl/issues/5424

20. Test suite

20.1 SSL tunnel

Make our own version of stunnel for simple port forwarding to enable HTTPS
and FTP-SSL tests without the stunnel dependency, and it could allow us to
provide test tools built with either OpenSSL or GnuTLS.

20.2 nicer lacking perl message

If perl wasn't found by the configure script, don't attempt to run the tests
but print a nice explanation of why they don't run.

20.3 more protocols supported

Extend the test suite to include more protocols. The telnet tests could just
do FTP or HTTP operations (for which we have test servers).

20.4 more platforms supported

Make the test suite work on more platforms, such as OpenBSD and Mac OS.
Remove the fork()s and it should become even more portable.

20.5 Add support for concurrent connections

Tests 836, 882 and 938 were designed to verify that separate connections
aren't used when using different login credentials in protocols that
shouldn't re-use a connection under such circumstances.

Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
connections. The read while() loop seems to loop until it receives a
disconnect from the client, where it then enters the waiting-for-connections
loop. When the client opens a second connection to the server, the first
connection hasn't been dropped (unless it has been forced - which we
shouldn't do in these tests) and thus the wait-for-connections loop is never
entered to receive the second connection.

20.6 Use the RFC6265 test suite

A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
https://github.com/abarth/http-state/tree/master/tests

It would be really awesome if someone would write a script/setup that would
run curl with that test suite and detect deviances. Ideally, that would even
be incorporated into our regular test suite.

20.7 Support LD_PRELOAD on macOS

LD_PRELOAD doesn't work on macOS, but there are tests which require it to
run properly. Look into making the preload support in runtests.pl portable
such that it uses DYLD_INSERT_LIBRARIES on macOS.

20.8 Run web-platform-tests url tests

Run the web-platform-tests url tests and compare the results with browsers
on wpt.fyi.

It would help us find issues to fix and help us document where our parser
differs from the WHATWG URL spec parsers.

See https://github.com/curl/curl/issues/4477

20.9 Bring back libssh tests on Travis

In https://github.com/curl/curl/pull/7012 we removed the libssh builds and
tests from Travis CI due to them not working. This should be remedied and
the libssh builds brought back.

21. MQTT

21.1 Support rate-limiting

The rate-limiting logic is done in the PERFORMING state in multi.c but MQTT
is not (yet) implemented to use that!