In the previous commit I introduced the ability for PuTTY to talk to a
server speaking the bare ssh-connection protocol, and listed several
applications for that ability. But none of those applications is any
use without a server that speaks the same protocol. Until now, the
only such server has been the Unix-domain socket presented by an
upstream connection-sharing PuTTY - and we already had a way to
connect to that.
So here's the missing piece: by reusing code that already existed for
the testing SSH server Uppity, I've created a program that will speak
the bare ssh-connection protocol on its standard I/O channels. If you
want to get a shell session over any of the transports I mentioned in
the last commit, this is the program you need to run at the far end of
it.
I have yet to write the documentation, but just in case I forget, the
name stands for 'Pseudo Ssh for Untappable, Separately Authenticated
Networks'.
This is the same protocol that PuTTY's connection sharing has been
using for years, to communicate between the downstream and upstream
PuTTYs. I'm now promoting it to be a first-class member of the
protocols list: if you have a server for it, you can select it in the
GUI or on the command line, and write out a saved session that
specifies it.
This would be completely insecure if you used it as an ordinary
network protocol, of course. Not only is it non-cryptographic and wide
open to eavesdropping and hijacking, but it's not even _authenticated_
- it begins after the userauth phase of SSH. So there isn't even the
mild security theatre of entering an easy-to-eavesdrop password, as
there is with, say, Telnet.
However, that's not what I want to use it for. My aim is to use it for
various specialist and niche purposes, all of which involve speaking
it over an 8-bit-clean data channel that is already set up, secured
and authenticated by other methods. There are lots of examples of such
channels:
- a userv(1) invocation
- the console of a UML kernel
- the stdio channels into other kinds of container, such as Docker
- the 'adb shell' channel (although it seems quite hard to run a
custom binary at the far end of that)
- a pair of pipes between PuTTY and a Cygwin helper process
- and so on.
So this protocol is intended as a convenient way to get a client at
one end of any of those to run a shell session at the other end. Unlike
other approaches, it will give you all the SSH-flavoured amenities
you're already used to, like forwarding your SSH agent into the
container, or forwarding selected network ports in or out of it, or
letting it open a window on your X server, or doing SCP/SFTP style
file transfer.
Of course another way to get all those amenities would be to run an
ordinary SSH server over the same channel - but this approach avoids
having to manage a phony password or authentication key, or taking up
your CPU time with pointless crypto.
PSCP and PSFTP can only work over a protocol enough like SSH to be
able to run subsystems (or at the very least a remote command, for
old-style PSCP). Historically we've implemented this restriction by
having them not support any protocol-selection command-line options at
all, and hardwiring them to instantiate ssh_backend.
This commit regularises them to be more like the rest of the tools.
You can select a protocol using the appropriate command-line option,
provided it's a protocol in those tools' backends[] array. And the
setup code will find the BackendVtable to instantiate by the usual
method of calling backend_vt_from_proto.
Currently, this makes essentially no difference: those tools link in
be_ssh.c, which means the only supported backend is SSH. So the effect
is that now -ssh is an accepted option with no effect, instead of
being rejected. But it opens the way to add other protocols that are
SSH-like enough to run file transfer over.
I'm reusing the 'id' string from each BackendVtable as the name of its
command-line option, which means I don't need to manually implement an
option for each new protocol.
The previous 'name' field was awkwardly serving both purposes: it was
a machine-readable identifier for the backend used in the saved
session format, and it was also used in error messages when Plink
wanted to complain that it didn't support a particular backend. Now
there are two separate name fields for those purposes.
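To illustrate the shape this takes (with simplified declarations rather
than the exact ones in the source), the lookup amounts to no more than
a scan over the tool's own backend list:

    /* Hedged sketch, not the real declarations: each backend's vtable
     * carries a machine-readable 'id' (reused as its command-line option
     * name and in saved sessions) and a separate human-readable name for
     * error messages. Tools find the vtable to instantiate by scanning
     * their own backends[] array. */
    #include <stddef.h>

    typedef struct BackendVtable {
        int protocol;            /* protocol number, e.g. a PROT_* constant */
        const char *id;          /* machine-readable: "-<id>" option, saved sessions */
        const char *displayname; /* human-readable, for error messages */
    } BackendVtable;

    /* Each tool links in its own NULL-terminated list of supported backends. */
    extern const BackendVtable *const backends[];

    const BackendVtable *backend_vt_from_proto(int protocol)
    {
        for (size_t i = 0; backends[i]; i++)
            if (backends[i]->protocol == protocol)
                return backends[i];
        return NULL;             /* protocol not supported by this tool */
    }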
Now you can type an absolute pathname (starting with '/') into the
hostname box in Unix GUI PuTTY, or into the hostname slot on the Unix
Plink command line, and the effect will be that PuTTY makes an AF_UNIX
connection to the specified Unix-domain socket in place of a TCP/IP
connection.
I don't _yet_ know of anyone running SSH on a Unix-domain socket, but
at the very least it'll be useful to me for debugging and testing, and
I'm pretty sure there will be other specialist uses sooner or later.
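For anyone wondering what that amounts to at the system-call level,
here's a minimal sketch of an AF_UNIX connect - ordinary POSIX, not the
actual uxnet.c code:

    /* If the 'hostname' begins with '/', treat it as a path and make an
     * AF_UNIX connection instead of resolving it as a TCP/IP host name. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int connect_unix_socket(const char *path)
    {
        struct sockaddr_un addr;
        if (strlen(path) >= sizeof(addr.sun_path))
            return -1;                          /* path too long for sockaddr_un */

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strcpy(addr.sun_path, path);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;          /* ready to speak SSH over this file descriptor */
    }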
Sometimes, within a switch statement, you want to declare local
variables specific to the handler for one particular case. Until now
I've mostly been writing this in the form

    switch (discriminant) {
      case SIMPLE:
        do stuff;
        break;
      case COMPLICATED:
        {
            declare variables;
            do stuff;
        }
        break;
    }

which is ugly because the two pieces of essentially similar code
appear at different indent levels, and also inconvenient because you
have less horizontal space available to write the complicated case
handler in - particularly undesirable because _complicated_ case
handlers are the ones most likely to need all the space they can get!
After encountering a rather nicer idiom in the LLVM source code, and
after a bit of hackery this morning figuring out how to persuade
Emacs's auto-indent to do what I wanted with it, I've decided to move
to an idiom in which the open brace comes right after the case
statement, and the code within it is indented the same as it would
have been without the brace. Then the whole case handler (including
the break) lives inside those braces, and you get something that looks
more like this:

    switch (discriminant) {
      case SIMPLE:
        do stuff;
        break;
      case COMPLICATED: {
        declare variables;
        do stuff;
        break;
      }
    }

This commit is a big-bang change that reformats all the complicated
case handlers I could find into the new layout. This is particularly
nice in the Pageant main function, in which almost _every_ case
handler had a bundle of variables and was long and complicated. (In
fact that's what motivated me to get round to this.) Some of the
innermost parts of the terminal escape-sequence handling are also
breathing a bit easier now the horizontal pressure on them is
relieved.
(Also, in a few cases, I was able to remove the extra braces
completely, because the only variable local to the case handler was a
loop variable which our new C99 policy allows me to move into the
initialiser clause of its for statement.)
Viewed with whitespace ignored, this is not too disruptive a change.
Downstream patches that conflict with it may need to be reapplied
using --ignore-whitespace or similar.
This links up the new re-encryption facilities to the Unix Pageant
client-mode command line. Analogously to -d and -D, 'pageant -r key-id'
re-encrypts a single key, and 'pageant -R' re-encrypts everything.
The reencrypt-all request is unusual in its ability to be _partially_
successful. To handle this I've introduced a new return status,
PAGEANT_ACTION_WARNING. At the moment, users of this client code don't
expect it to appear on any request, and I'll make them watch for it
only in the case where I know a particular function can generate it.
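Sketching the idea (these declarations are illustrative, not a
quotation of pageant.h):

    /* A status enum of roughly this shape, with the new value sitting
     * alongside the existing success/failure statuses. */
    typedef enum PageantActionStatus {
        PAGEANT_ACTION_OK,       /* request fully succeeded */
        PAGEANT_ACTION_FAILURE,  /* request failed outright */
        PAGEANT_ACTION_WARNING   /* partial success, e.g. re-encrypt-all
                                  * re-encrypted some keys but not others */
    } PageantActionStatus;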
These requests parallel 'delete key' and 'delete all keys', but they
work on keys which you originally uploaded in encrypted form: they
cause Pageant to delete only the _decrypted_ form of the key, so that
the next attempt to use the key will need to re-prompt for its
passphrase.
We set it when we started prompting for a passphrase, and never unset
it again when the passphrase prompt either succeeded or failed. Until
now it hasn't mattered, because the only use of the flag is to
suppress duplicate prompts, and once a key has been decrypted, we
never need to prompt for it again, duplicate or otherwise. But that's
about to change, so now this bug needs fixing.
There was a lot of ugly, repetitive, error-prone code that decoded
agent responses in raw data buffers. Now my internal client query
function is returning something that works as a BinarySource, so we
can decode agent responses using the marshal.h system like any other
SSH-formatted message in this code base.
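As an illustration of the style of decoding this enables (a sketch, not
a quotation of the real client code):

    /* Assumes PuTTY's own headers (defs.h / marshal.h) are in scope for
     * BinarySource, ptrlen and the get_*() decoders; treat the exact
     * getter names as my best recollection rather than gospel. */
    static void decode_keylist_sketch(BinarySource *src)
    {
        if (get_byte(src) != 12)       /* SSH2_AGENT_IDENTITIES_ANSWER */
            return;                    /* not the reply we were expecting */

        unsigned long nkeys = get_uint32(src);
        for (unsigned long i = 0; i < nkeys; i++) {
            ptrlen blob = get_string(src);      /* public key blob */
            ptrlen comment = get_string(src);   /* key comment */
            if (get_err(src))
                break;                 /* message truncated: stop cleanly */
            /* ... hand one (blob, comment) pair to the caller ... */
            (void)blob; (void)comment;
        }
    }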
While I'm at it, I've centralised more of the parsing of key lists
(saving repetition in pageant_add_key and pageant_enum_keys),
including merging most of the logic between SSH-1 and SSH-2. The old
functions pageant_get_keylist1 and pageant_get_keylist2 aren't exposed
in pageant.h any more, because they no longer exist in that form, and
also because nothing was using them anyway. (Windows Pageant was using
the separate pageant_nth_ssh2_key() functions that talk directly to
the core, and Unix Pageant was using the more cooked client function
pageant_enum_keys.)
Apparently a handful of calls to that particular function managed to
miss my big-bang conversion to using bool where appropriate, and were
still passing the constants 0 and 1.
We received a report that if you enable Windows 10's high-contrast
mode, the text in PuTTY's installer UI becomes invisible, because it's
displayed in the system default foreground colour against a background
of the white right-hand side of our 'msidialog.bmp' image. That's fine
when the system default fg is black, but high-contrast mode flips it
to white, and now you have white on white text, oops.
Some research in the WiX bug tracker suggests that in Windows 10 you
don't actually have to use BMP files for your installer images any
more: you can use PNG, and PNGs can be transparent. However, someone
else reported that that only works in up-to-date versions of Windows.
And in fact there's no need to go that far. A more elegant answer is
to simply not cover the whole dialog box with our background image in
the first place. I've reduced the size of the background image so that
it _only_ contains the pretty picture on the left-hand side, and omits
the big white rectangle that used to sit under the text. So now the
RHS of the dialog is not covered by any image at all, which has the
same effect as it being covered with a transparent image, except that
it doesn't require transparency support from msiexec. Either way, the
background for the text ends up being the system's default dialog-box
background, in the absence of any images or controls placed on top of
it - so when the high-contrast mode is enabled, it flips to black at
the same time as the text flips to white, and everything works as it
should.
The slight snag is that the pre-cooked WiX UI dialog specifications
let you override the background image itself, but not the Width and
Height fields in the control specifications that refer to them. So if
you just try to drop in a narrow image in the most obvious way, it
gets stretched across the whole window.
But that's not a show-stopper, because we're not 100% dependent on
getting WiX to produce exactly the right output. We already have the
technology to postprocess the MSI _after_ it comes out of WiX: we're
using it to fiddle the target-platform field for the Windows on Arm
installers. So all I had to do was to turn msiplatform.py into a more
general msifixup.py, add a second option to change the width of the
dialog background image, and run it on the x86 installers as well as
the Arm ones.
A PageantSignOp for a not-yet-decrypted key was being linked on to its
key's blocked_requests queue twice, mangling the linked list integrity
and causing segfaults. Now we take care to NULL out the pointers
within the signop to indicate that it isn't currently on the queue,
and check whether it's currently linked before linking or unlinking it.
Reading draft-miller-ssh-agent-04 more carefully, I see that I missed
a few things from the extension-message spec. Firstly, there's an
extension request "query" which is supposed to list all the extensions
you support. Secondly, if you recognise an extension-request name but
are then unable to fulfill the request for some other reason, you're
supposed to return a new kind of failure message that's distinct from
SSH_AGENT_FAILURE, because for extensions, the latter is reserved for
"I don't even know what this extension name means at all".
I've fixed both of those bugs in Pageant by making a centralised map
of known extension names to an enumeration of internal ids, and an
array containing the name for each id. So we can reliably answer the
"query" extension by iterating over that array, and also use the same
array to recognise known extensions up front and give them centralised
processing (in particular, resetting the failure-message type) before
switching on the particular extension index.
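A hedged sketch of that arrangement (the enum values and the second
extension name here are made up for illustration):

    #include <stddef.h>
    #include <string.h>

    typedef enum PageantExtension {
        EXT_UNKNOWN = -1,
        EXT_QUERY,
        EXT_ADD_PPK,
        N_EXTENSIONS
    } PageantExtension;

    /* One array maps wire names to internal ids, so the same data both
     * answers the "query" extension and dispatches known extensions. */
    static const char *const extension_names[N_EXTENSIONS] = {
        [EXT_QUERY]   = "query",
        [EXT_ADD_PPK] = "add-ppk@example.org",  /* hypothetical private-namespace name */
    };

    /* On EXT_UNKNOWN the caller replies with plain SSH_AGENT_FAILURE; for
     * any recognised extension it can first switch the default failure
     * reply to the extension-specific failure message. */
    static PageantExtension lookup_extension(const char *name)
    {
        for (int i = 0; i < N_EXTENSIONS; i++)
            if (!strcmp(extension_names[i], name))
                return (PageantExtension)i;
        return EXT_UNKNOWN;
    }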
This reads data from standard input, turns it into an SSH-2 sign
request, and writes the resulting signature blob to standard output.
I don't really anticipate many uses for this other than testing. But
it _is_ convenient for testing changes to Pageant itself: it lets me
ask for a signature without first having to construct a pointless SSH
session that will accept the relevant key.
This will allow it to be used more conveniently for things other than
key files.
For the moment, the implementation still lives in sshpubk.c. Moving it
out into utils.c or misc.c would be nicer, but it has awkward
dependencies on marshal.c and the per-platform f_open function.
Perhaps another time.
This makes all the new deferred-decryption business actually _useful_
for the first time: you can now load an encrypted key file and then
get a prompt to decrypt it on first use, without Pageant being in the
low-usability debug mode.
Currently, the option to present runtime prompts is enabled if Pageant
is running with an X display detected, regardless of lifetime mode.
I'm not really sure why that's necessary: by my understanding of the C
standard, it shouldn't be. But my observation is that when compiling
with {Address,Leak} Sanitiser enabled, pageant --askpass can somehow
manage to exit without having actually written the passphrase to its
standard output.
This applies to both server modes ('pageant -E key.ppk [lifetime]')
and client mode ('pageant -a -E key.ppk').
I'm not completely confident that the CLI syntax is actually right
yet, but for the moment, it's enough that it _exists_. Now I don't
have to test the encrypted-key loading via manually mocked-up agent
requests.
Until now, all the functions that have to work in both the Pageant
server and a separate client process have been implemented by having
two code paths for every request, one of which marshals an agent
request and passes it to agent_query_synchronous, and the other just
calls one of the internal functions in the Pageant core.
This is already quite ugly, and it'll only get worse when I start
adding more client requests. So here's a simplification: now, there's
only one code path, and we _always_ marshal a wire-format agent
request. When we're the same process as the Pageant server, we pass it
to the actual message handler and let that decode it again, enforcing
by assertion that it's not an asynchronous operation that's going to
delay.
This patch removes a layer of indentation from many functions in the
Pageant client layer, so it's best viewed with whitespace ignored.
On Windows, due to a copy-paste goof, the message that should have
read "Configuring n stop bits" instead ended with "data bits".
While I'm here, I've arranged that the "1 stop bit" case of that
message is in the singular. And then I've done the same thing again on
Unix, because I noticed that message was unconditionally plural too.
Like other 'utils' modules, the point is that sshutils.c has no
external dependencies, so it's safe to include in a tool without
requiring you to bring in a cascade of other modules you didn't really
want.
Right now I'm only planning to use this change in an out-of-tree
experiment, but it's harmless to commit the change itself here.
Now I've got an enum for PlugLogType, it's easier to add things to it.
We were giving a blow-by-blow account of each connection attempt, and
when it failed, saying what went wrong before we moved on to the next
candidate address, but when one finally succeeded, we never logged
_that_. Now we do.
This is a small cleanup that removes a couple of copies of some boring
stubs, in favour of having just one copy that you can link against.
Unix Pageant can't currently use this, because it's in a precarious
state of _nearly_ having a random number generator: it links against
sshprng but not sshrand, and only uses it for the randomised keypress
acknowledgments in the GUI askpass prompt. But that means it does use
uxnoise, unlike the truly randomness-free tools.
There aren't quite as many of these as there are on Unix, but Windows
Plink and PSFTP still share some suspiciously similar-looking code.
Now they're both clients of wincliloop.c.
Unix Plink, Unix Pageant in server mode, Uppity, and the post-
connection form of PSFTP's command-line reading code all had very
similar loops in them, which run a pollwrapper and mediate between
that, timers, and toplevel callbacks. It's long past time the common
code between all of those became a reusable shared routine.
So, this commit introduces uxcliloop.c, and turns all the previous
copies of basically the same loop into a call to cli_main_loop with
various callback functions to configure the parts that differ.
The sets of poll(2) events that we check in order to return SELECT_R
and SELECT_W overlap: to be precise, they have POLLERR in common. So
if an fd signals POLLERR, then pollwrap_get_fd_rwx will respond by
saying that it has both SELECT_R and SELECT_W available on it - even
if the caller had only asked for one of those.
In other words, you can get a spurious SELECT_W notification on an fd
that you never asked for SELECT_W on in the first place. This
definitely isn't what I'd meant that API to do.
In particular, if a socket in the middle of an asynchronous connect()
signals POLLERR, then Unix Plink will call select_result for it with
SELECT_R and then SELECT_W respectively. The former will notice that
it's got an error condition and call plug_closing - and _then_ the
latter will decide that it's writable and set s->connected! The plan
was to only select it for write until it was connected, but this bug
in pollwrap was defeating that plan.
Now pollwrap_get_fd_rwx should only ever return a set of rwx flags
that's a subset of the one that the client asked for via
pollwrap_add_fd_rwx.
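The essence of the fix, as a self-contained sketch rather than the real
uxpoll.c code (the flag names are stand-ins):

    #include <poll.h>

    #define SELECT_R 1
    #define SELECT_W 2
    #define SELECT_X 4

    /* Translate poll(2) revents into r/w/x flags, then mask with the flags
     * the caller registered, so POLLERR can no longer manufacture a
     * SELECT_W that was never asked for. */
    static int revents_to_rwx(short revents, int requested_rwx)
    {
        int rwx = 0;
        if (revents & (POLLIN | POLLHUP | POLLERR))
            rwx |= SELECT_R;               /* readable, or error to report */
        if (revents & (POLLOUT | POLLERR))
            rwx |= SELECT_W;               /* writable, or error to report */
        if (revents & POLLPRI)
            rwx |= SELECT_X;               /* exceptional condition */
        return rwx & requested_rwx;        /* the fix: never exceed the request */
    }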
Spotted by Leak Sanitiser, while I was investigating the PSFTP /
proftpd issue mentioned in the previous commit (with ASan on as
usual).
The two very similar loops that read PSFTP commands from the
interactive prompt and a batch file differed in one respect: only one
of them remembered to free the command afterwards. Now I've moved the
freeing code out into a subroutine that both loops can use.
Ever since I reworked the SSH code to have multiple internal packet
queues, there's been a long-standing FIXME in ssh_sendbuffer() saying
that we ought to include the data buffered in those queues as part of
reporting how much data is buffered on standard input.
Recently a user reported that 'proftpd', or rather its 'mod_sftp'
add-on that implements an SFTP-only SSH server, exposes a bug related
to that missing piece of code. The xfer_upload system in sftp.c starts
by pushing SFTP write messages into the SSH code for as long as
sftp_sendbuffer() (which ends up at ssh_sendbuffer()) reports that not
too much data is buffered locally. In fact what happens is that all
those messages end up on the packet queues between SSH protocol
layers, so they're not counted by sftp_sendbuffer(), so we just keep
going until there's some other reason to stop.
Usually the reason we stop is because we've filled up the SFTP
channel's SSH-layer window, so we need the server to send us a
WINDOW_ADJUST before we're allowed to send any more data. So we return
to the main event loop and start waiting for reply packets. And when
the window is moderate (e.g. OpenSSH currently seems to present about
2MB), this isn't really noticeable.
But proftpd presents the maximum-size window of 2^32-1 bytes, and as a
result we just keep shovelling more and more packets into the internal
packet queues until PSFTP has grown to 4GB in size, and only then do
we even return to the event loop and start actually sending them down
the network. Moreover, this happens again at rekey time, because while
a rekey is in progress, ssh2transport stops emptying the queue of
outgoing packets sent by its higher layer - so, again, everything just
keeps buffering up somewhere that sftp_sendbuffer can't see it.
But this commit fixes it! Each PacketProtocolLayer now provides a
vtable method for asking how much data it currently has queued. Most
of them share a default implementation which just returns the newly
added total_size field from their pq_out; the exception is
ssh2transport, which also has to account for data queued in its higher
layer. And ssh_sendbuffer() adds that on to the quantity it already
knew about in other locations, to give a more realistic idea of the
currently buffered data.
The queue-node structure shared between PktIn and PktOut now has a
'formal_size' field, which is initialised appropriately by the various
packet constructors. And the PacketQueue structure has a 'total_size'
field which tracks the sum of the formal sizes of all the packets on
the queue, and is automatically updated by the push, pop and
concatenate functions.
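A minimal self-contained sketch of that bookkeeping (simplified types
and names, not the real PacketQueue code):

    #include <stddef.h>

    typedef struct PacketQueueNode {
        struct PacketQueueNode *next, *prev;
        size_t formal_size;        /* how much this packet counts for */
    } PacketQueueNode;

    typedef struct PacketQueue {
        PacketQueueNode end;       /* sentinel for a circular doubly linked list */
        size_t total_size;         /* sum of formal_size over queued packets */
    } PacketQueue;

    static void pq_init(PacketQueue *pq)
    {
        pq->end.next = pq->end.prev = &pq->end;
        pq->total_size = 0;
    }

    static void pq_push(PacketQueue *pq, PacketQueueNode *node)
    {
        node->next = &pq->end;
        node->prev = pq->end.prev;
        node->prev->next = node;
        pq->end.prev = node;
        pq->total_size += node->formal_size;    /* running total goes up ... */
    }

    static PacketQueueNode *pq_pop(PacketQueue *pq)
    {
        PacketQueueNode *node = pq->end.next;
        if (node == &pq->end)
            return NULL;                        /* queue is empty */
        node->prev->next = node->next;
        node->next->prev = node->prev;
        pq->total_size -= node->formal_size;    /* ... and back down on pop */
        return node;
    }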
No functional change, and nothing uses the new fields yet: this is
infrastructure that will be used in the next commit.
If the agent client code doesn't even manage to read a full response
message at all (for example, because the agent it's talking to is
Pageant running in debug mode and you just ^Ced it or it crashed,
which is what's been happening to me all afternoon), then previously,
the userauth code would loop back round to the top of the main loop
without having actually sent any request, so the client code would
deadlock waiting for a response to nothing.
I've often found it useful that you can make symlinks to Unix-domain
sockets, and then connect() on the symlink path will redirect to the
original socket.
This commit adds an option to Unix Pageant which will make it symlink
its socket path to a link location of your choice. My initial use case
is when running Pageant in debug mode during development: if you run a
new copy of it every few minutes after making a code change, then it's
annoying to have it change its socket path every time so you have to
keep pasting its setup command into your test shell. Not any more! Now
you can run 'pageant --debug --symlink fixed-location', and then your
test shell can point its SSH_AUTH_SOCK at the fixed location all the
time.
There are very likely other use cases too, but that's the one that
motivated me to add the option.
This adds an extension request to the agent protocol (named in our
private namespace, naturally) which allows you to upload a key file in
the form of a string containing an entire .ppk file. If the key is
encrypted, then Pageant stores it in such a way that it will show up
in the key list, and on the first attempt to sign something with it,
prompt for a passphrase (if it can), decrypt the key, and then answer
the request.
There are a lot of rough edges still to deal with, but this is good
enough to have successfully answered one request, so it's a start.
This is the easiest place to implement _something_ that will work as a
runtime passphrase prompt, which means I get to use it to test the
code I'm about to add to the Pageant core to make use of those
prompts. Once that's working, we can think about adding prompts for
the 'proper' usage modes.
The debug-mode passphrase prompts are implemented by simply reading
from standard input, having emitted a log message mentioning that a
prompt is impending. We put standard input into non-echoing mode, but
otherwise don't print any visible prompt (because standard output will
in general receive further log messages, which would break it anyway).
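The non-echoing read itself is just standard POSIX termios; a hedged
sketch (not the exact Pageant code) looks like this:

    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    /* Disable ECHO on the terminal, read one line from standard input,
     * then restore the previous settings. */
    static int read_passphrase_noecho(char *buf, int buflen)
    {
        struct termios oldmode, newmode;
        int have_tty = (tcgetattr(STDIN_FILENO, &oldmode) == 0);

        if (have_tty) {
            newmode = oldmode;
            newmode.c_lflag &= ~ECHO;          /* keep ICANON: still line-buffered */
            tcsetattr(STDIN_FILENO, TCSANOW, &newmode);
        }

        int got = (fgets(buf, buflen, stdin) != NULL);

        if (have_tty)
            tcsetattr(STDIN_FILENO, TCSANOW, &oldmode);

        if (got)
            buf[strcspn(buf, "\n")] = '\0';    /* strip the trailing newline */
        return got;
    }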
This is only just good enough for initial testing. In particular, it
won't cope if two prompts are in flight at the same time. But good
enough for initial testing is better than nothing!
This begins to head towards the goal of storing a key file encrypted
in Pageant, and decrypting it on demand via a GUI prompt the first
time a client requests a signature from it. That won't be a facility
available in all situations, so we have to be able to return failure
from the prompt.
More precisely, there are two versions of this API, one in
PageantClient and one in PageantListenerClient: the stream
implementation of PageantClient implements the former API and hands it
off to the latter. Windows Pageant has to directly implement both (but
they will end up funnelling to the same function within winpgnt.c).
NFC: for the moment, the new API functions are never called, and every
implementation of them returns failure.
I don't really know why it was still in cmdgen.c at all. There's no
reason it shouldn't live in its own source file, and keep cmdgen.c for
the actual code of the key generation program!
This reworks the cmdgen main program so that it loads the input file
into a LoadedFile right at the start, and then every time it needs to
do something with the contents, it calls one of the API functions
taking a BinarySource instead of one taking a Filename.
The usefulness of this is that now we can read from things that aren't
regular files, and can't be rewound or reopened. In particular, the
filename "-" is now taken (per the usual convention) to mean standard
input.
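The underlying idea, as a hedged sketch rather than the real LoadedFile
code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Slurp a named file, or standard input when the filename is "-",
     * into a single memory buffer, so later parsing never needs to
     * rewind or reopen anything. */
    char *load_whole_file(const char *filename, size_t *len_out)
    {
        FILE *fp = strcmp(filename, "-") ? fopen(filename, "rb") : stdin;
        if (!fp)
            return NULL;

        size_t size = 0, cap = 4096;
        char *data = malloc(cap);
        size_t got;
        while (data && (got = fread(data + size, 1, cap - size, fp)) > 0) {
            size += got;
            if (size == cap) {                 /* buffer full: grow it */
                char *grown = realloc(data, cap * 2);
                if (!grown) {
                    free(data);
                    data = NULL;
                    break;
                }
                data = grown;
                cap *= 2;
            }
        }

        if (fp != stdin)
            fclose(fp);
        *len_out = size;
        return data;                           /* caller frees; NULL on error */
    }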
So now you can pipe a public or private key file into cmdgen's
standard input and have it do something useful. For example, I was
recently experimenting with the SFTP-only SSH server that comes with
'proftpd', which keeps its authorized_keys file in RFC 4716 format
instead of the OpenSSH one-liner format, and I found I wanted to do
    grep 'my-key-comment' ~/.ssh/authorized_keys | puttygen -p -
to quickly get hold of my existing public key to put in that file. But
I had to go via a temporary file to make that work, because puttygen
couldn't read from standard input. Next time, it will be able to!