The specific thing that's strange about it is that it's _not_ an error
even though the compiler is quite justified in being suspicious about
it! The MS APIs define two different structures to have identical
formats.
After Pavel Kryukov pointed out that I have to put _something_ in the
'ssh_key' structure, I thought of an actually useful thing to put
there: why not make it store a pointer to the ssh_keyalg structure?
Then ssh_key becomes a classoid - or perhaps 'traitoid' is a closer
analogy - in the same style as Socket and Plug. And just like Socket
and Plug, I've also arranged a system of wrapper macros that avoid the
need to mention the 'object' whose method you're invoking twice at
each call site.
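As a minimal sketch of the shape this gives (the method names here are
illustrative, not the full vtable):

    typedef struct ssh_keyalg ssh_keyalg;
    typedef struct ssh_key {
        const ssh_keyalg *vt;           /* the key's own vtable pointer */
    } ssh_key;

    struct ssh_keyalg {
        void (*freekey)(ssh_key *key);
        char *(*cache_str)(ssh_key *key);
        /* ... further methods ... */
    };

    /* Wrapper macros in the Socket/Plug style, so each call site only
     * names the object once: */
    #define ssh_key_free(key)       ((key)->vt->freekey(key))
    #define ssh_key_cache_str(key)  ((key)->vt->cache_str(key))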
The new vtable pointer directly replaces an existing field of struct
ec_key (which was usable by several different ssh_keyalgs, so it
already had to store a pointer to the currently active one), and also
replaces the 'alg' field of the ssh2_userkey structure that wraps up a
cryptographic key with its comment field.
I've also taken the opportunity to clean things up a bit in general:
most of the methods now have new and clearer names (e.g. you'd never
know that 'newkey' made a public-only key while 'createkey' made a
public+private key pair unless you went and looked it up, but now that
they're called 'new_pub' and 'new_priv' you might be in with a
chance), and I've completely removed the openssh_private_npieces field
after realising that it was duplicating information that is actually
_more_ conveniently obtained by calling the new_priv_openssh method
(formerly openssh_createkey) and throwing away the result.
It's an SSH-1 specific function, so it should have a name reflecting
that, and it didn't. Also it had one of those outdated APIs involving
passing it a client-allocated buffer and size. Now it has a sensible
name, and internally it constructs the output string using a strbuf
and returns it dynamically allocated.
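In outline, the change looks something like this (the function and
helper names are my own illustration, not the real identifiers):

    /* Old shape: the caller supplies a buffer and a size, and hopes
     * the size is enough. */
    void rsa_fingerprint(char *str, int len, RSAKey *key);

    /* New shape: build the string in a strbuf, then hand it back
     * dynamically allocated for the caller to free. */
    char *rsa_ssh1_fingerprint(RSAKey *key)
    {
        strbuf *out = strbuf_new();
        /* ... put_* calls appending the fingerprint text ... */
        return strbuf_to_str(out);
    }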
Like the corresponding rewrite of conf serialisation, this affects not
just conf_deserialise itself but also the per-platform filename and
fontspec deserialisers.
I expected this to be nightmarish because WiX 3 doesn't know about the
Windows on Arm platform at all. Fortunately, it turns out that it
doesn't have to: testing on a borrowed machine I find that Windows on
Arm's msiexec.exe is quite happy to take MSIs whose platform field in
the _SummaryInformation table says "Intel".
In fact, that seemed to be _all_ that my test machine would accept: I
tried taking the MSI apart with msidump, putting some other value in
there (e.g. "Arm64" or "Arm") and rebuilding it with msibuild, and all
I got was messages from msiexec saying "This installation package is
not supported by this processor type."
So in fact I just give WiX the same -arch x86 option that I give it
for the real 32-bit x86 Windows installer, but then I point it at the
Arm binaries, and that seems to produce a viable MSI. There is the
unfortunate effect that msiexec forcibly sets the default install
location to 'Program Files (x86)' no matter how I strive to make it
set it any other way, but that's only cosmetic: the programs _run_
just fine no matter which Program Files directory they're installed
into (and I know this won't be the first piece of software that
installs itself into the wrong one). Perhaps some day we can find a
way to do that part better.
On general principles of caution (and of not really wanting to force
Arm machines to emulate x86 code at all), the Arm versions of the
installers have the new DllOk=no flag, so they're pure MSI with no
embedded DLLs.
This arranges that we can build a completely pure MSI file, which
doesn't depend on any native code at install time. We don't lose much
by doing this - only the option to pop up the README file at the end
of installation, and validation of the install directory when you
select it from a file browser.
My immediate use for this is that I want to use it for installers that
will run on Windows on Arm. But it also seems to me like quite an
attractive property in its own right - no native code at all running
at install time would be an _especially_ good guarantee that that code
can't be hijacked by DLLs in the download directory. So I may yet
decide that the features we're losing are not critical to _any_
version of the MSI, and throw them out unconditionally.
Apparently Windows on Arm has an emulator that lets it run x86
binaries without it being obvious, which could get confusing when
people start reporting what version of what they're running where.
(Indeed, it might get confusing for _me_ during early testing.) So now
the Windows builds explicitly state 'x86' or 'Arm' as well as 32- or
64-bit.
Now I've got FROMFIELD, I can rework it so that structures providing
an implementation of the Socket or Plug trait no longer have to have
the vtable pointer as the very first thing in the structure. In
particular, this means that the ProxySocket structure can now directly
implement _both_ the Socket and Plug traits, which is always
_logically_ how it's worked, but previously it had to be implemented
via two separate structs linked to each other.
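For reference, FROMFIELD is the usual container_of-style construction;
a sketch of the idea (the definition in the real header may differ in
detail):

    #include <stddef.h>

    /* Given a pointer to one member of a structure, recover a pointer
     * to the whole structure, so trait members need not sit at
     * offset 0. */
    #define FROMFIELD(object, type, field) \
        ((type *)((char *)(object) - offsetof(type, field)))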
During last week's work, I made a mistake in which I got the arguments
backwards in one of the key-blob-generating functions - mistakenly
swapped the 'void *' key instance with the 'BinarySink *' output
destination - and I didn't spot the mistake until run time, because in
C you can implicitly convert both to and from void * and so there was
no compile-time failure of type checking.
Now that I've introduced the FROMFIELD macro that downcasts a pointer
to one field of a structure to retrieve a pointer to the whole
structure, I think I might start using that more widely to indicate
this kind of polymorphic subtyping. So now all the public-key
functions in the struct ssh_signkey vtable handle their data instance
in the form of a pointer to a subfield of a new zero-sized structure
type 'ssh_key', which outside the key implementations indicates 'this
is some kind of key instance but it could be of any type'; they
downcast that pointer internally using FROMFIELD in place of the
previous ordinary C cast, and return one by returning &foo->sshk for
whatever foo they've just made up.
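So the resulting idiom in each key implementation looks roughly like
this (the structure contents and function name are illustrative):

    struct ec_key {
        struct ec_curve *curve;
        /* ... the rest of the key material ... */
        ssh_key sshk;               /* deliberately not the first member */
    };

    static void eckey_freekey(ssh_key *key)
    {
        struct ec_key *ec = FROMFIELD(key, struct ec_key, sshk);
        /* ... free the key material ... */
        sfree(ec);
    }

    /* ...and constructors return &ec->sshk as the opaque ssh_key *. */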
The sshk member is not at the beginning of the structure, which means
all those FROMFIELDs and &key->sshk are actually adding and
subtracting an offset. Of course I could have put the member at the
start anyway, but I had the idea that it's actually a feature _not_ to
have the two types start at the same address, because it means you
should notice earlier rather than later if you absentmindedly cast
from one to the other directly rather than by the approved method (in
particular, if you accidentally assign one through a void * and back
without even _noticing_ you perpetrated a cast). In particular, this
enforces that you can't sfree() the thing even once without realising
you should instead have called the right freekey function. (I found
several bugs by this method during initial testing, so I think it's
already proved its worth!)
While I'm here, I've also renamed the vtable structure ssh_signkey to
ssh_keyalg, because it was a confusing name anyway - it describes the
_algorithm_ for handling all keys of that type, not a specific key. So
ssh_keyalg is the collection of code, and ssh_key is one instance of
the data it handles.
This is a cleanup I started to notice a need for during the BinarySink
work. It removes a lot of faffing about casting things to char * or
unsigned char * so that some API will accept them, even though lots of
such APIs really take a plain 'block of raw binary data' argument and
don't care what C thinks the signedness of that data might be - they
may well reinterpret it back and forth internally.
So I've tried to arrange for all the function call APIs that ought to
have a void * (or const void *) to have one, and those that need to do
pointer arithmetic on the parameter internally can cast it back at the
top of the function. That saves endless ad-hoc casts at the call
sites.
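A small made-up illustration of the convention:

    /* The API boundary takes const void *, so callers needn't cast... */
    void hash_update(HashState *s, const void *vdata, int len)
    {
        /* ...and the single cast needed for pointer arithmetic lives
         * at the top of the function instead. */
        const unsigned char *data = (const unsigned char *)vdata;
        while (len-- > 0)
            hash_update_byte(s, *data++);
    }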
There was a time, back when the USA was more vigorously against
cryptography, when we toyed with the idea of having a version of PuTTY
that outsourced its cryptographic primitives to the Microsoft optional
encryption API, which would effectively create a tool that acted like
PuTTY proper on a system with that API installed, but automatically
degraded to being PuTTYtel on a system without, and meanwhile (so went
the theory) it could be moved freely across national borders with
crypto restrictions, because it didn't _contain_ any of the actual
crypto.
I don't recall that we ever got it working at all. And certainly the
vestiges of it here and there in the current code are completely
unworkable - they refer to an 'mscrypto.c' that doesn't even exist,
and the ifdefs in the definitions of structures like RSAKey and
MD5Context are not matched by any corresponding ifdefs in the code. So
I ought to have got round to removing it long ago, in order to avoid
misleading anyone.
This gets rid of yet another huge pile of beating around the bush with
length-counting. Also, this time, the BinarySink in question is a
little more interesting than just being a strbuf every time: on
Windows, where the shared-memory Pageant IPC system imposes a hard
limit on the size of message we can return, I've written a custom
BinarySink implementation that collects up to that much data and then
gives up and sets an overflow flag rather than continue to allocate
memory.
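That sink has roughly the following shape (the structure layout and
BinarySink plumbing are my approximation, not verbatim code):

    #include <string.h>

    typedef struct boundedsink {
        unsigned char *buf;             /* fixed-size output area */
        size_t len, maxlen;
        int overflowed;
        BinarySink_IMPLEMENTATION;
    } boundedsink;

    static void boundedsink_write(BinarySink *bs, const void *data,
                                  size_t len)
    {
        boundedsink *sink = BinarySink_DOWNCAST(bs, boundedsink);
        if (sink->overflowed || len > sink->maxlen - sink->len) {
            sink->overflowed = 1;       /* give up rather than grow */
            return;
        }
        memcpy(sink->buf + sink->len, data, len);
        sink->len += len;
    }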
So the main Pageant code no longer has to worry about checking
AGENT_MAX_MSGLEN all the time - and better still, the Unix version of
Pageant is no longer _limited_ by AGENT_MAX_MSGLEN in its outgoing
messages, i.e. it could store an extremely large number of keys if
it needed to. That limitation is now a local feature of Windows
Pageant rather than intrinsic to the whole code base.
(AGENT_MAX_MSGLEN is still used to check incoming agent messages for
sanity, however. Mostly that's because I feel I ought to check them
against _some_ limit, and this one seems sensible enough. Incoming
agent messages are more bounded anyway - they generally don't hold
more than _one_ private key.)
This simplifies the client code both in ssh.c and in the client side
of Pageant.
I've cheated a tiny bit by preparing agent requests in a strbuf that
has space reserved at the front for the packet frame, which makes life
easier for the code that sends them off.
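Roughly like this, as a sketch (the helper function name is
hypothetical; the message constant is the standard agent-protocol one):

    static strbuf *agent_request_identities(void)
    {
        strbuf *request = strbuf_new();
        put_uint32(request, 0);   /* space reserved for the length frame */
        put_byte(request, SSH2_AGENTC_REQUEST_IDENTITIES);
        /* the sending code later rewrites those first four bytes with
         * the real length of everything that follows them. */
        return request;
    }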
This affects all the functions that generate public and private key
and signature blobs of all kinds, plus ssh_ecdhkex_getpublic. Instead
of returning a bare block of memory and taking an extra 'int *length'
parameter, all these functions now write to a BinarySink, and it's the
caller's job to have prepared an appropriate one where they want the
output to go (usually a strbuf).
The main value of this change is that those blob-generation functions
were chock full of ad-hoc length-counting and data marshalling. You
have only to look at rsa2_{public,private}_blob, for example, to see
the kind of thing I was keen to get rid of!
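The calling convention now has roughly this shape (signatures sketched
from memory; the wrapper function is hypothetical):

    /* Previously: unsigned char *rsa2_public_blob(void *key, int *len); */

    /* Now the function just emits into whatever BinarySink it's given: */
    void rsa2_public_blob(ssh_key *key, BinarySink *bs);

    /* A typical caller collects the output in a strbuf: */
    static strbuf *get_public_blob(ssh_key *key)
    {
        strbuf *blob = strbuf_new();
        rsa2_public_blob(key, BinarySink_UPCAST(blob));
        return blob;
    }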
Now instead of iterating through conf twice in separate functions,
once to count up the size of the serialised data and once to write it
out, I just go through once and dump it all in a strbuf.
(Of course, I could still do a two-pass count-then-allocate approach
easily enough in this system; nothing would stop me writing a
BinarySink implementation that didn't actually store any data and just
counted its size, and then I could choose at each call site whether I
preferred to do it that way.)
In fact, those functions don't even exist any more. The only way to
get data into a primitive hash state is via the new put_* system. Of
course, that means put_data() is a viable replacement for every
previous call to one of the per-hash update functions - but just
mechanically doing that would have missed the opportunity to simplify
a lot of the call sites.
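The flavour of the call-site change (the hash type and old function
names are placeholders for whichever hash is in use):

    static void hash_some_packet_data(SHA256_State *hash,
                                      const void *buf, int len,
                                      unsigned long seq)
    {
        /* Before: SHA256_Bytes(hash, buf, len), plus hand-rolled
         * serialisation of anything structured. */

        /* After: the hash state is itself a BinarySink, so the
         * generic put_* helpers apply directly. */
        put_data(hash, buf, len);
        put_uint32(hash, seq);
    }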
This centralises a few things that multiple header files were
previously defining, and were protecting against each other's
redefinition with ifdefs - small things like structs and typedefs. Now
all those things are in a defs.h which is by definition safe to
include _first_ (out of all the codebase-local headers) and only need
to be defined once.
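A flavour of what ends up in defs.h (these particular typedefs are
examples, not an exact list):

    /* defs.h: safe to include before any other codebase-local header */
    typedef struct conf_tag Conf;
    typedef struct terminal_tag Terminal;
    typedef struct Filename Filename;
    typedef struct FontSpec FontSpec;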
Lots of functions had really generic names (like 'makekey'), or names
that missed out an important concept (like 'rsakey_pubblob', which
loads a public blob from a _file_ and doesn't generate it from an
in-memory representation at all). Also, the opaque 'int order' that
distinguishes the two formats of public key blob is now a mnemonic
enumeration, and while I'm at it, rsa_ssh1_public_blob takes one of
those as an extra argument.
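Something along these lines (the enumerator names are my guess at the
mnemonics):

    typedef enum {
        RSA_SSH1_EXPONENT_FIRST,      /* exponent precedes modulus */
        RSA_SSH1_MODULUS_FIRST        /* modulus precedes exponent */
    } RsaSsh1Order;

    void rsa_ssh1_public_blob(BinarySink *bs, RSAKey *key,
                              RsaSsh1Order order);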
NFC for the moment, because the bufchain is always specially
constructed to hold exactly the same data that would have been passed
in to the function as a (pointer,length) pair. But this API change
allows get_userpass_input to express the idea that it consumed some
but not all of the data in the bufchain, which means that later on
I'll be able to point the same function at a longer-lived bufchain
containing the full stream of keyboard input and avoid dropping
keystrokes that arrive too quickly after the end of an interactive
password prompt.
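In other words, the API moves to roughly this shape (a sketch, not the
exact prototype):

    /* The input now arrives as a bufchain; the function consumes what
     * it needs and leaves any unused keystrokes queued for later. */
    int get_userpass_input(prompts_t *p, bufchain *input);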
NFC: this is a preliminary refactoring, intended to make my life
easier when I start changing around the APIs used to pass user
keyboard input around. The fewer functions even _have_ such an API,
the less I'll have to do at that point.
Thanks to Jiri Kaspar for sending this patch (apart from the new docs
section, which is in my own words), which implements a feature we've
had as a wishlist item ('utf8-plus-vt100') for a long time.
I was actually surprised it was possible to implement it in so few
lines of code! I'd forgotten, or possibly never noticed in the first
place, that even in UTF-8 mode PuTTY not only accepts but still
_processes_ all the ISO 2022 control sequences and shift characters,
and keeps running track of all the same state in term->cset and
term->cset_attrs that it tracks in ISO-2022-enabled modes. It's just
that in UTF-8 mode, at the very last minute when a character+attribute
pair is about to be written into the terminal's character buffer, it
deliberately ignores the contents of those variables.
So all that was needed was a new flag checked at that last moment
which causes it not quite to ignore them after all, and bingo,
utf8-plus-vt100 is supported. And it works no matter which ISO 2022
sequences you're using; whether you're using ESC ( 0 to select the
line drawing set directly into GL and ESC ( B to get back when you're
done, or whether you send a preliminary ESC ( B ESC ) 0 to get GL/GR
to be ASCII and line drawing respectively so you can use SI and SO as
one-byte mode switches thereafter, both work just as well.
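As a quick way to see it in action, here's a tiny demo using the first
of those styles (assuming a UTF-8 terminal with the new option
enabled):

    #include <stdio.h>

    int main(void)
    {
        /* ESC ( 0 selects the VT100 line-drawing set into GL; 'lqqqk'
         * then comes out as a corner, three horizontal lines and
         * another corner; ESC ( B puts GL back to ASCII. */
        printf("\033(0lqqqk\033(B\n");
        return 0;
    }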
This implementation strategy has a couple of consequences, which I
don't think matter very much one way or the other but I document them
just in case they turn out to be important later:
- if an application expecting this mode has already filled your
terminal window with lqqqqqqqqk, then enabling this mode in Change
Settings won't retroactively turn them into the line drawing
characters you wanted, because no memory is preserved in the screen
buffer of what the ISO 2022 state was when they were printed. So
the application still has to do a screen refresh.
- on the other hand, if you already sent the ESC ( 0 or whatever to
put the terminal _into_ line drawing mode, and then you turn on
this mode in Change Settings, you _will_ still be in line drawing
mode, because the system _does_ remember your current ISO 2022
state at all times, whether it's currently applying it to output
printing characters or not.
This is a heavily edited (by me) version of a patch originally due to
Nico Williams and Viktor Dukhovni. Their comments:
* Don't delegate credentials when rekeying unless there's a new TGT
or the old service ticket is nearly expired.
* Check for the above conditions more frequently (every two minutes
by default) and rekey when we would delegate credentials.
* Do not rekey with very short service ticket lifetimes; some GSSAPI
libraries may lose the race to use an almost expired ticket. Adjust
the timing of rekey checks to try to avoid this possibility.
My further comments:
The most interesting thing about this patch to me is that the use of
GSS key exchange causes a switch over to a completely different model
of what host keys are for. This comes from RFC 4462 section 2.1: the
basic idea is that when your session is mostly bidirectionally
authenticated by the GSSAPI exchanges happening in initial kex and
every rekey, host keys become more or less vestigial, and their
remaining purpose is to allow a rekey to happen if the requirements of
the SSH protocol demand it at an awkward moment when the GSS
credentials are not currently available (e.g. timed out and haven't
been renewed yet). As such, there's no need for host keys to be
_permanent_ or to be a reliable identifier of a particular host, and
RFC 4462 allows for the possibility that they might be purely
transient and only for this kind of emergency fallback purpose.
Therefore, once PuTTY has done a GSS key exchange, it disconnects
itself completely from the permanent host key cache functions in
storage.h, and instead switches to a _transient_ host key cache stored
in memory with the lifetime of just that SSH session. That cache is
populated with keys received from the server as a side effect of GSS
kex (via the optional SSH2_MSG_KEXGSS_HOSTKEY message), and used if
later in the session we have to fall back to a non-GSS key exchange.
However, in practice servers we've tested against do not send a host
key in that way, so we also have a fallback method of populating the
transient cache by triggering an immediate non-GSS rekey straight
after userauth (reusing the code path we also use to turn on OpenSSH
delayed encryption without the race condition).
Now its remit is widened to include not just the character-classes
list box, but also anything else related specifically to _copying_
rather than _pasting_, i.e. the terminal -> clipboard direction.
This allows me to move the Windows-specific 'write RTF to clipboard'
option into the newly named Copy panel, which gets it _out_ of the
main Selection panel which had just overflowed due to the new option
added by the previous commit.
(It looks a little asymmetric that there's no corresponding Paste
panel now! But since it would currently contain a single checkbox,
I'll wait until there's more to put in it...)
Ahem. I _spotted_ this in code review, and forgot to make the change
before pushing!
Because it's legitimate for a C implementation to define 'NULL' so
that it expands to just 0, it follows that if you use NULL in a
variadic argument list where the callee will expect to extract a
pointer, you run the risk of putting an int-sized rather than
pointer-sized argument on the list and causing the consumer to get out
of sync. So you have to add an explicit cast.
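For example, with a varargs function that takes a NULL sentinel, such
as PuTTY's dupcat (the wrapper function here is just for illustration):

    static char *make_path(const char *dir, const char *leaf)
    {
        /* Risky if NULL expands to plain 0: the sentinel may be passed
         * as an int, and the callee's va_arg will read garbage:
         *     return dupcat(dir, "/", leaf, NULL);
         * Safe: force a pointer-sized sentinel with an explicit cast. */
        return dupcat(dir, "/", leaf, (const char *)NULL);
    }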
The PuTTY GUIs (Unix and Windows) maintain an in-memory event log
for display to users as they request. This uses ints for tracking
eventlog size, which is subject to memory exhaustion and (given
enough heap space) overflow attacks by servers (via, e.g., constant
rekeying).
Also a bounded log is more user-friendly. It is rare to want more
than the initial logging and the logging from a few recent rekey
events.
The Windows fix has been tested using Dr. Memory as a valgrind
substitute. No errors corresponding to the affected code showed up.
The Dr. Memory results.txt was split into a file per-error and then
grep Error $(grep -l windlg *)|cut -d: -f3-|sort |uniq -c
was used to compare. Differences arose from different usage of the GUI,
but no error could be traced to the code modified in this commit.
The Unix fix has been tested using valgrind. We don't destroy the
eventlog_stuff eventlog arrays, so we can't be entirely sure that we
don't leak more than we did before, but from code inspection it looks
like we don't (and anyways, if we leaked as much as before, just without
the integer overflow, well, that's still an improvement).
Windows, annoyingly, doesn't seem to have any bit flag in
GetFileAttributes which distinguishes a regular file (which can be
truncated destructively) from a named pipe (which can't).
Fortunately, I'm saved from needing one by the policy I came up with
in the previous commit, in which I don't bother to ask about
truncating a file if it already has zero length, because it would make
no difference anyway. Named pipes do show up as zero-length in
GetFileAttributesEx, so they get treated like zero-length regular
files by this change and it's all good.
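So the decision boils down to something like this (a sketch; error
handling and the surrounding prompt logic are omitted):

    #include <windows.h>

    static int needs_truncation_prompt(const char *filename)
    {
        WIN32_FILE_ATTRIBUTE_DATA attrs;
        if (!GetFileAttributesEx(filename, GetFileExInfoStandard, &attrs))
            return 0;               /* can't tell, so don't prompt */
        /* Zero-length files - and named pipes, which report zero
         * length - need no truncation prompt: truncating them would
         * change nothing. */
        return attrs.nFileSizeHigh != 0 || attrs.nFileSizeLow != 0;
    }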
On all platforms, you can now configure which clipboard the mouse
pastes from, which clipboard Ctrl-Ins and Shift-Ins access, and which
Ctrl-Shift-C and Ctrl-Shift-V access. In each case, the options are:
- nothing at all
- a clipboard which is implicitly written by the act of mouse
selection (the PRIMARY selection on X, CLIP_LOCAL everywhere else)
- the standard clipboard written by explicit copy/paste UI actions
(CLIPBOARD on X, the unique system clipboard elsewhere).
Also, you can control whether selecting text with the mouse _also_
writes to the explicitly accessed clipboard.
The wording of the various messages changes between platforms, but the
basic UI shape is the same everywhere.
This lays some groundwork for making PuTTY's cut and paste handling
more flexible in the area of which clipboard(s) it reads and writes,
if more than one is available on the system.
I've introduced a system of list macros which define an enumeration of
integer clipboard ids, some defined centrally in putty.h (at present
just a CLIP_NULL which never has any text in it, because that seems
like the sort of thing that will come in useful for configuring a
given copy or paste UI action to be ignored) and some defined per
platform. All the front end functions that copy and paste take a
clipboard id, and the Terminal structure is now configured at startup
to tell it which clipboard id it should paste from on a mouse click,
and which it should copy to on a selection.
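The list-macro arrangement looks roughly like this (apart from
CLIP_NULL, the macro and id names are illustrative):

    /* putty.h: clipboards that exist on every platform. */
    #define CROSS_PLATFORM_CLIPBOARDS(X) \
        X(CLIP_NULL, "null clipboard")   \
        /* end of list */

    /* Each platform header then defines PLATFORM_CLIPBOARDS(X) with
     * its own entries, e.g. a PRIMARY selection and a CLIPBOARD on X. */
    #define ALL_CLIPBOARDS(X)        \
        CROSS_PLATFORM_CLIPBOARDS(X) \
        PLATFORM_CLIPBOARDS(X)

    /* Expanding the list once yields the integer clipboard ids. */
    #define CLIP_ID(id, name) id,
    enum { ALL_CLIPBOARDS(CLIP_ID) N_CLIPBOARDS };
    #undef CLIP_ID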
However, I haven't actually added _real_ support for multiple X11
clipboards, in that the Unix front end supports a single CLIP_SYSTEM
regardless of whether it's in OS X or GTK mode. So this is currently a
NFC refactoring which does nothing but prepare the way for real
changes to come.
It's actually a function specific to the Windows front end, and has
been all along - but I've only just noticed that no other front end
either uses it or defines it.
Previously, both the Unix and Windows front ends would respond to a
paste action by retrieving data from the system clipboard, converting
it appropriately, _storing_ it in a persistent dynamic data block
inside the front end, and then calling term_do_paste(term), which in
turn would call back to the front end via get_clip() to retrieve the
current contents of that stored data block.
But, as far as I can tell, this was a completely pointless mechanism,
because after a data block was written into this storage area, it
would be immediately used for exactly one paste, and then never
accessed again until the next paste action caused it to be freed and
replaced with a new chunk of pasted data.
So why on earth was it stored persistently at all, and why that
callback mechanism from frontend to terminal back to frontend to
retrieve it for the actual paste action? I have no idea. This change
removes the entire system and replaces it with the completely obvious
alternative: the character-set-converted version of paste data is
allocated in a _local_ variable in the frontend paste functions,
passed directly to term_do_paste which now takes (buffer,length)
parameters, and freed immediately afterwards. get_clip() is gone.
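So the call now has roughly this shape (the parameter types are my best
recollection rather than a quote):

    #include <wchar.h>

    /* terminal.h */
    void term_do_paste(Terminal *term, const wchar_t *data, int len);

    /* Each front end's paste handler now just converts the system
     * clipboard contents into a local wide-character buffer, passes
     * it straight to term_do_paste, and frees it afterwards. */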
These include an unused variable left over from the command-line
refactoring; an explicit referencing of the module handle for
sspicli.dll which we really do deliberately load and then don't
(directly) use; a missing pointer-type cast in the Windows handle
socket code; and two 32/64 bit integer size mismatches in the types of
functions I was importing from system API DLLs.
The last of those are a bit worrying, and suggest to me that after
going to all that trouble to add type-checking of those runtime
imports in commit 49fb598b0, I might have only checked the resulting
compiler output in a 32-bit build and not a 64-bit one. Oops!
This is another piece of long-overdue refactoring similar to the
recent commit e3796cb77. But where that one dealt with normalisation
of stuff already stored _in_ a Conf by whatever means (including, in
particular, handling a user typing 'username@host.name' into the
Hostname box of the GUI session dialog box), this one deals with
handling argv entries and putting them into the Conf.
This isn't exactly a pure no-functional-change-at-all refactoring. On
the other hand, it isn't a full-on cleanup that completely
rationalises all the user-visible behaviour as well as the code
structure. It's somewhere in between: I've preserved all the behaviour
quirks that I could imagine a reason for having intended, but taken
the opportunity to _not_ faithfully replicate anything I thought was
clearly just a bug.
So, for example, the following inconsistency is carefully preserved:
the command 'plink -load session nextword' treats 'nextword' as a host
name if the loaded session hasn't provided a hostname already, and
otherwise treats 'nextword' as the remote command to execute on the
already-specified remote host, but the same combination of arguments
to GUI PuTTY will _always_ treat 'nextword' as a hostname, overriding
a hostname (if any) in the saved session. That makes some sense to me
because of the different shapes of the overall command lines.
On the other hand, there are two behaviour changes I know of as a
result of this commit: a third argument to GUI PuTTY (after a hostname
and port) now provokes an error message instead of being silently
ignored, and in Plink, if you combine a -P option (specifying a port
number) with the historical comma-separated protocol selection prefix
on the hostname argument (which I'd completely forgotten even existed
until this piece of work), then the -P will now override the selected
protocol's default port number, whereas previously the default port
would win. For example, 'plink -P 12345 telnet,hostname' will now
connect via Telnet to port 12345 instead of to port 23.
There may be scope for removing or rethinking some of the command-
line syntax quirks in the wake of this change. If we do decide to do
anything like that, then hopefully having it all in one place will
make it easier to remove or change things consistently across the
tools.
A more or less identical piece of code to sanitise the CONF_host
string prior to session launch existed in Windows PuTTY and both
Windows and Unix Plink. It's long past time it was centralised.
While I'm here, I've added a couple of extra comments in the
centralised version, including one that - unfortunately - tries _but
fails_ to explain why a string of the form "host.name:1234" doesn't
get the suffix moved into CONF_port the way "user@host" moves the
prefix into CONF_username. Commit c1c1bc471 is the one I'm referring
to in the comment, and unfortunately it has an unexplained one-liner
log message from before I got into the habit of being usefully
verbose.
It's an incoherent concept! There should not be any such thing as an
error box that terminates the entire program but is not modal. If it's
bad enough to terminate the whole program, i.e. _all_ currently live
connections, then there's no point in permitting progress to continue
in windows other than the affected one, because all windows are
affected anyway.
So all previous uses of fatalbox() have become modalfatalbox(), except
those which looked to me as if they shouldn't have been fatal in the
first place, e.g. lingering pieces of error handling in winnet.c which
ought to have had the severity of 'give up on this particular Socket
and close it' rather than 'give up on the ENTIRE UNIVERSE'.
ATTR_REVERSE was being handled in the front ends, and was causing the
foreground and background colours to be switched. (I'm not completely
sure why I made that design decision; it might be purely historical,
but then again, it might also be because reverse video is one effect
on the fg and bg colours that must still be performed even in unusual
frontend-specific situations like display-driven monochrome mode.)
This affected both explicit reverse video enabled using SGR 7, and
also the transient reverse video arising from mouse selection. Thanks
to Markus Gans for reporting the bug in the latter, which when I
investigated it turned out to affect the former as well.
I've done this on a 'where possible' basis: in Windows paletted mode
(in case anyone is still using an old enough graphics card to need
that!) I simply haven't bothered, and will completely ignore the dim
flag.
Markus Gans points out that some applications which (not at all
unreasonably) don't trust $TERM to tell them the full capabilities of
their terminal will use the sequence "OSC 4 ; nn ; ? BEL" to ask for
the colour-palette value in position nn, and they may not particularly
care _what_ the results are but they will use them to decide whether
the right number of colour palette entries even exist.
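Concretely, the query and a typical reply look like this (the reply
format follows xterm's convention):

    #include <stdio.h>

    int main(void)
    {
        /* Ask for palette entry 255. A terminal that has it answers
         * with something like  ESC ] 4 ; 255 ; rgb:eeee/eeee/eeee BEL
         * and a terminal that doesn't simply stays silent. */
        printf("\033]4;255;?\007");
        fflush(stdout);
        return 0;
    }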
Otherwise, moving the cursor (at least in active, filled-cell mode) on
to a true-coloured character cell causes it to vanish completely
because the cell's colours override the thing that differentiates the
cursor.
I'm not sure if any X11 monochrome visuals or Windows paletted display
modes are still around, but just in case they are, we shouldn't
attempt true colour on either kind of display.
I know some users don't like any colour _at all_, and we have a
separate option to turn off xterm-style 256-colour sequences, so it
seems remiss not to have an option to disable true colour as well.
This is a heavily rewritten version of a patch originally by Lorenz
Diener; it was tidied up somewhat by Christian Brabandt, and then
tidied up more by me. The basic idea is to add to the termchar
structure a pair of small structs encoding 24-bit RGB values, each
with a flag indicating whether it's turned on; if it is, it overrides
any other specification of fg or bg colour for that character cell.
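In outline, the addition is something like this (field and type names
approximate):

    typedef struct optionalrgb {
        unsigned char enabled;      /* does this override the usual colour? */
        unsigned char r, g, b;      /* 24-bit RGB value */
    } optionalrgb;

    typedef struct truecolour {
        optionalrgb fg, bg;
    } truecolour;

    /* termchar gains a truecolour member alongside its existing
     * character and attribute fields. */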
I've added a test line to colours.txt containing a few example colours
from /usr/share/X11/rgb.txt. In fact it makes quite a good demo to run
the whole of rgb.txt through this treatment, with a command such as
perl -pe 's!^\s*(\d+)\s+(\d+)\s+(\d+).*$!\e[38;2;$1;$2;$3m$&\e[m!' rgb.txt