This is a cleanup I started to notice a need for during the BinarySink
work. It removes a lot of faffing about casting things to char * or
unsigned char * so that some API will accept them, even though lots of
such APIs really take a plain 'block of raw binary data' argument and
don't care what C thinks the signedness of that data might be - they
may well reinterpret it back and forth internally.
So I've tried to arrange for all the function call APIs that ought to
have a void * (or const void *) to have one, and those that need to do
pointer arithmetic on the parameter internally can cast it back at the
top of the function. That saves endless ad-hoc casts at the call
sites.
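As a minimal sketch of the pattern (the function and its internals
here are invented for illustration, not real PuTTY APIs):

    /* Callers can pass char *, unsigned char *, or anything else,
     * without a cast at the call site: */
    void hash_update(hash_state *s, const void *data, size_t len)
    {
        const unsigned char *p = (const unsigned char *)data;
        while (len-- > 0)
            hash_add_byte(s, *p++);  /* arithmetic needs a concrete type */
    }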
In commit 528513dde I absentmindedly replaced a write to the local
variable 'need_size' of drawing_area_setup with a write to
inst->drawing_area_setup_needed, imagining that they had the same
effect. But actually, need_size was doing two jobs and I only replaced
one of them: it was also the variable that indicated that the logical
terminal size had changed and so we had to call term_size() to make
the terminal.c data structures resize themselves appropriately. The
loss of that call also inhibited generation of SIGWINCH.
NFC for the moment, because the bufchain is always specially
constructed to hold exactly the same data that would have been passed
in to the function as a (pointer,length) pair. But this API change
allows get_userpass_input to express the idea that it consumed some
but not all of the data in the bufchain, which means that later on
I'll be able to point the same function at a longer-lived bufchain
containing the full stream of keyboard input and avoid dropping
keystrokes that arrive too quickly after the end of an interactive
password prompt.
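A sketch of the new shape (bufchain_prefix/bufchain_consume stand for
whatever chunk-at-a-time access the real code uses; prompts_done and
process_input are hypothetical helpers):

    /* Old: int get_userpass_input(prompts_t *p,
     *                             const unsigned char *in, int inlen);
     * New: */
    int get_userpass_input(prompts_t *p, bufchain *input)
    {
        while (bufchain_size(input) > 0 && !prompts_done(p)) {
            void *data;
            int len;
            bufchain_prefix(input, &data, &len);     /* peek first chunk */
            int used = process_input(p, data, len);
            bufchain_consume(input, used);     /* leave the rest unread */
        }
        return prompts_done(p) ? 1 : -1;    /* -1: need more input later */
    }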
Changing the window's font size with Alt-< or Alt-> was not setting
any of the flags that make drawing_area_setup consider itself to have
been non-spuriously called, so the real window would enlarge without
the backing surface also doing so.
In GTK 3.10 and above, high-DPI support is arranged by each window
having a property called a 'scale factor', which translates logical
pixels as seen by most of the GTK API (widget and window sizes and
positions, coordinates in the "draw" event, etc) into the physical
pixels on the screen. This is handled more or less transparently,
except that one side effect is that your Cairo-based drawing code had
better be able to cope with that scaling without getting confused.
PuTTY's isn't, because we do all our serious drawing on a separate
Cairo surface we made ourselves, and then blit subrectangles of that
to the window during updates. This has two bad consequences. Firstly,
our surface has a size derived from what GTK told us the drawing area
size is, i.e. corresponding to GTK's _logical_ pixels, so when the
scale factor is (say) 2, our drawing takes place at half size and then
gets scaled up by the final blit in the draw event, making it look
blurry and unpleasant. Secondly, those final blits seem to end up
offset by half a pixel, so that a second blit over the same
subrectangle doesn't _quite_ completely wipe out the previously
blitted data - so there's a ghostly rectangle left behind everywhere
the cursor has been.
It's not that GTK doesn't _let_ you find out the scale factor; it's
just that it's in an out-of-the-way piece of API that you have to call
specially. So now we do: our backing surface is now created at a pixel
resolution matching the screen's real pixels, and we translate GTK's
scale factor into an ordinary cairo_scale() before we commence
drawing. So we still end up drawing the same text at the same size -
and this strategy also means that non-text elements like cursor
outlines and underlining will be scaled up with the screen DPI rather
than stubbornly staying one physical pixel thick - but now it's nice
and sharp at full screen resolution, and the subrectangle blits in the
draw event are back to affecting the exact set of pixels we expect
them to.
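In outline, the new setup looks something like this (the GTK and
Cairo calls are real; the surrounding structure is simplified):

    int scale = gtk_widget_get_scale_factor(inst->area); /* GTK >= 3.10 */

    /* Make the backing surface in physical pixels, not logical ones: */
    inst->surface = cairo_image_surface_create(
        CAIRO_FORMAT_ARGB32, width * scale, height * scale);

    /* Then, before drawing, map logical coordinates to physical: */
    cairo_t *cr = cairo_create(inst->surface);
    cairo_scale(cr, scale, scale);
    /* ... existing drawing code runs unchanged ... */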
One silly consequence is that, immediately after removing the last
one, I've installed a handler for the GTK "configure-event" signal!
That's because the GTK 3 docs claim that that's how you get notified
that your scale factor has changed at run time (e.g. if you
reconfigure the scale factor of a whole monitor in the GNOME settings
dialog). Actually in practice I seem to find out via the "draw" event
before "configure" bothers to tell me, but now I've got a usefully
idempotent function for 'check whether the scale factor has changed
and sort it out if so', I don't see any harm in calling it from
anywhere it _might_ be useful.
I've been using that signal since the very first commit of this source
file, as a combined way to be notified when the size of the drawing
area changes (typically due to user window resizing actions) and also
when the drawing area is first created and available to be drawn on.
Unfortunately, testing on Ubuntu 18.04, I ran into an oddity, in which
the call to gtk_widget_show(inst->window) in new_session_window() has
the side effect of delivering a spurious configure_event on the
drawing area with size 1x46 pixels. This causes the terminal to resize
itself to 1 column wide, and the mistake isn't rectified until a
followup configure-event arrives after new_session_window returns to
the GTK main loop. But that means terminal output can occur between
those two configure events (the connection-sharing message "Reusing a
shared connection to host.name" is a good example), and when it does,
it gets
embarrassingly wrapped at one character per line down the left column.
I briefly tried to bodge around this by trying to heuristically guess
which configure events were real and which were spurious, but I have
no faith in that strategy continuing to work. I think a better
approach is to abandon configure-event completely, and move to a
system in which the two purposes I was using it for are handled by two
_different_ GTK signals, namely "size-allocate" (for knowing when we
get resized) and "realize" (for knowing when the drawing area
physically exists for us to start setting up Cairo or GDK machinery).
The result seems to have fixed the silly one-column wrapping bug, and
retained the ability to handle window resizes, on every GTK version I
have conveniently available to test on, including GTK 3 both before
and after these spurious configures started to happen.
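The new wiring is roughly as follows (standard GTK signal
connections; the handler names are invented):

    g_signal_connect(G_OBJECT(inst->area), "realize",
                     G_CALLBACK(area_realised), inst);  /* drawable now
                                                     exists: set up Cairo */
    g_signal_connect(G_OBJECT(inst->area), "size-allocate",
                     G_CALLBACK(area_size_allocate), inst); /* got resized */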
GTK 3 PuTTY/pterm has always assumed that if it was compiled with
_support_ for talking to the raw X11 layer underneath GTK and GDK,
then it was entitled to expect that raw X11 layer to exist at all
times, i.e. that GDK_DISPLAY_XDISPLAY would return a meaningful X
display that it could do useful things with. So if you ran it over the
GDK Wayland backend, it would immediately segfault.
Modern GTK applications need to cope with multiple GDK backends at run
time. It's fine for GTK PuTTY to _contain_ the code to find and use
underlying X11 primitives like the display and the X window id, but it
should be prepared to find that it's running on Wayland (or something
else again!) so those functions don't return anything useful - in
which case it should degrade gracefully to the subset of functionality
that can be accessed through backend-independent GTK calls.
Accordingly, I've centralised the use of GDK_DISPLAY_XDISPLAY into a
support function get_x_display() in gtkmisc.c, which starts by
checking that there actually is one first. All previous direct uses of
GDK_*_XDISPLAY now go via that function, and check the result for NULL
afterwards. (To save faffing about calling that function too many
times, I'm also caching the display pointer in more places, and
passing it as an extra argument to various subfunctions, mostly in
gtkfont.c.)
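The check amounts to something like this (a sketch, assuming a GTK 3
build with the X11 backend compiled in; on GTK 1/2, where X11 is the
only possibility, the test compiles away):

    Display *get_x_display(void)
    {
        GdkDisplay *display = gdk_display_get_default();
    #if GTK_CHECK_VERSION(3,0,0)
        if (!GDK_IS_X11_DISPLAY(display))
            return NULL;        /* Wayland, or something else again */
    #endif
        return GDK_DISPLAY_XDISPLAY(display);
    }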
Similarly, the get_windowid() function that retrieves the window id to
put in the environment of pterm's child process has to be prepared for
there not to be a window id.
This isn't a complete fix for all Wayland-related problems. The other
one I'm currently aware of is that the default font is "server:fixed",
which is a bad default now that it won't be available on all backends.
And I expect that further problems will show up with more testing. But
it's a start.
Colin Watson reports that on pre-releases of Ubuntu 18.04, configure
events which don't actually involve a change of window size show up
annoyingly often. Our handling of configure events involves throwing
away the backing Cairo surface, making a fresh blank one, and
scheduling a top-level callback to get terminal.c to do a repaint and
populate the new surface; so a draw event before that callback occurs
causes the window contents to flicker off and on again, not to mention
wasting a lot of time.
The simplest solution is to spot spurious configures, and respond by
not throwing away the previous Cairo surface in the first place.
Up-to-date GTK3 has deprecated gdk_beep(), causing build failures due
to the default -Werror setting. So we now call gdk_window_beep()
instead, except in GTK1 (which doesn't have that function), where a
gtkcompat.h workaround maps it back to gdk_beep().
Looks as if I haven't retried the GTK1 build for a while, and recent
GTK frontend development has broken it. The selection revamp has
pointed out that GTK1 didn't have the accessor function
gtk_selection_data_get_selection(), the standard GdkAtom value
GDK_SELECTION_CLIPBOARD, or keysyms for alphabetic characters; and
also I had an initialisation of one of my own structure fields
(dp->selparams) accidentally not guarded by the same GTK-versioning
ifdef that controls whether or not it was defined.
This still isn't complete: I also need to add the variable collections
of things like mid-session special commands and saved session names,
and also I need to try to grey out menu items when they're not
applicable. But it's a start.
Just to avoid an endless proliferation of functions too small to see,
I've arranged an enumeration of action ids and a single
app_menu_action function on the receiving end, and in gtkapp.c, a list
macro that means I at least don't have to define the tiny callback
functions and the GActionEntry records by hand and keep them in sync.
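The shape of that list macro is roughly this (action names invented):

    #define MENU_ACTIONS(X) X(copy) X(paste) X(newwin) X(about)

    #define ACTION_ID(name) ACTION_ ## name,
    enum MenuAction { MENU_ACTIONS(ACTION_ID) };

    /* one generated stub per action, all funnelling into one place: */
    #define ACTION_CALLBACK(name)                                   \
        static void name ## _cb(GSimpleAction *a, GVariant *param,  \
                                gpointer data)                      \
        { app_menu_action(data, ACTION_ ## name); }
    MENU_ACTIONS(ACTION_CALLBACK)

    /* and the GActionEntry table, kept in sync automatically: */
    #define ACTION_ENTRY(name) { #name, name ## _cb },
    static const GActionEntry app_actions[] = { MENU_ACTIONS(ACTION_ENTRY) };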
This fixes the problem I'd previously noticed, that if you don't
configure the "Command key acts as Meta" setting, then keystrokes like
Command-Q which _ought_ to function as accelerators for the
application menu bar don't.
Turns out that this was for the totally obvious reason: the keyboard
event was still being processed by gtkwin.c's key_event() and
translated via the GTK IM into ordinary keyboard input. If instead I
return FALSE from key_event on detecting that a key event has a
non-Meta-configured Command modifier, then it will go to the next-
level key-event handler inside GTK itself which implements the menu
accelerator behaviour. Another problem ticked off the OS X checklist.
The gtkapp.c menu now has a Copy as well as Paste option; those menu
items, as well as the corresponding ones on the context menu and Copy
All, now address sets of clipboards parametrised between OS X and
ordinary GTK in unix.h. Also I've tweaked the wording of the
context-menu items to not use the X-specific terminology "CLIPBOARD"
on OS X.
I had a segfault on OS X today at Pterm.app shutdown. I wasn't able to
reproduce it in a debugger, but the cause seemed to be that
clipboard_clear called term_deselect (this was from before the patch
series that renamed that function) when inst->term was already NULL.
This must be because a clipboard_data_instance outlived its associated
inst->term, and quite likely its associated inst as well. But we can't
free those structures when a gui_data is freed, because GTK callbacks
will still depend on them; so instead we must have each gui_data keep
a list of active cdis pointing at it, and then at destruction time,
walk along the list nulling out each one's pointer to part of itself.
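In sketch form (field and function names invented, but this is the
shape of the fix):

    struct clipboard_data_instance {
        struct gui_data *inst; /* nulled out if the window dies first */
        struct clipboard_data_instance *next;
        /* ... the stored clipboard data itself ... */
    };

    static void delete_inst(struct gui_data *inst)
    {
        struct clipboard_data_instance *cdi;
        for (cdi = inst->cdi_list; cdi; cdi = cdi->next)
            cdi->inst = NULL;   /* GTK frees each cdi in its own time */
        /* ... rest of the cleanup ... */
    }

    /* and every cdi-using GTK callback then begins with:
     *     if (!cdi->inst) return;  */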
On all platforms, you can now configure which clipboard the mouse
pastes from, which clipboard Ctrl-Ins and Shift-Ins access, and which
Ctrl-Shift-C and Ctrl-Shift-V access. In each case, the options are:
 - nothing at all
 - a clipboard which is implicitly written by the act of mouse
   selection (the PRIMARY selection on X, CLIP_LOCAL everywhere else)
 - the standard clipboard written by explicit copy/paste UI actions
   (CLIPBOARD on X, the unique system clipboard elsewhere).
Also, you can control whether selecting text with the mouse _also_
writes to the explicitly accessed clipboard.
The wording of the various messages changes between platforms, but the
basic UI shape is the same everywhere.
All the data fields referring to the selection in 'struct gui_data'
have been pulled out into a separate structure of which there are now
multiple instances, and I've plumbed through what should be the right
pointers and integer ids to everywhere they should go. So now the GTK
front end defines CLIP_PRIMARY and CLIP_CLIPBOARD in place of the
temporary cop-out CLIP_SYSTEM from the previous commit, and copying
and pasting can be done via either one.
The defaults should be the same as before, except that now the non-Mac
versions of the GtkApplication front ends will access CLIP_PRIMARY in
response to most actions but the 'Paste' menu item will paste from
CLIP_CLIPBOARD. (That's mostly just as a demonstration that accessing
multiple clipboards even works.)
This lays some groundwork for making PuTTY's cut and paste handling
more flexible in the area of which clipboard(s) it reads and writes,
if more than one is available on the system.
I've introduced a system of list macros which define an enumeration of
integer clipboard ids, some defined centrally in putty.h (at present
just a CLIP_NULL which never has any text in it, because that seems
like the sort of thing that will come in useful for configuring a
given copy or paste UI action to be ignored) and some defined per
platform. All the front end functions that copy and paste take a
clipboard id, and the Terminal structure is now configured at startup
to tell it which clipboard id it should paste from on a mouse click,
and which it should copy from on a selection.
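The macro scheme looks roughly like this (lists abbreviated; at this
stage the only per-platform id is the CLIP_SYSTEM cop-out mentioned
below):

    /* putty.h: ids that exist on every platform */
    #define CROSS_PLATFORM_CLIPBOARDS(X)        \
        X(CLIP_NULL, "null clipboard")

    /* per-platform header, e.g. unix.h: */
    #define PLATFORM_CLIPBOARDS(X)              \
        X(CLIP_SYSTEM, "system clipboard")

    #define CLIP_ID(id, name) id,
    enum {
        CROSS_PLATFORM_CLIPBOARDS(CLIP_ID)
        PLATFORM_CLIPBOARDS(CLIP_ID)
        N_CLIPBOARDS
    };
    #undef CLIP_ID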
However, I haven't actually added _real_ support for multiple X11
clipboards, in that the Unix front end supports a single CLIP_SYSTEM
regardless of whether it's in OS X or GTK mode. So this is currently a
NFC refactoring which does nothing but prepare the way for real
changes to come.
Previously, both the Unix and Windows front ends would respond to a
paste action by retrieving data from the system clipboard, converting
it appropriately, _storing_ it in a persistent dynamic data block
inside the front end, and then calling term_do_paste(term), which in
turn would call back to the front end via get_clip() to retrieve the
current contents of that stored data block.
But, as far as I can tell, this was a completely pointless mechanism,
because after a data block was written into this storage area, it
would be immediately used for exactly one paste, and then never
accessed again until the next paste action caused it to be freed and
replaced with a new chunk of pasted data.
So why on earth was it stored persistently at all, and why that
callback mechanism from frontend to terminal back to frontend to
retrieve it for the actual paste action? I have no idea. This change
removes the entire system and replaces it with the completely obvious
alternative: the character-set-converted version of paste data is
allocated in a _local_ variable in the frontend paste functions,
passed directly to term_do_paste which now takes (buffer,length)
parameters, and freed immediately afterwards. get_clip() is gone.
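So each front end's paste handler now reduces to something like this
(a sketch; the conversion step is elided):

    static void paste_received(GtkWidget *widget, GtkSelectionData *seldata,
                               gpointer data)
    {
        struct gui_data *inst = (struct gui_data *)data;
        wchar_t *wdata;
        int wlen;
        /* ... convert the selection contents into (wdata, wlen) ... */
        term_do_paste(inst->term, wdata, wlen); /* new (buffer,length) API */
        sfree(wdata);
    }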
Now 'putty user@host' will do what you wanted on Unix the same way it
always has on Windows.
(Thanks to Geoff Winkless for pointing out this inconsistency. I've
redone his actual patch my way, but he should still be credited for
the inspiration!)
This one's in frontend_keypress(), which is supposed to close the
window on the first keypress after the session inside it terminates
(that is, if your close-on-exit settings haven't made it close already
at that point).
It looks to me as if that behaviour doesn't currently _work_, and
hasn't worked for quite a while (certainly it was broken as of 0.70,
well before I started on this weekend's refactoring), because when the
session terminates we delete inst->ldisc and that's what would
otherwise be calling frontend_keypress. I should probably decide what
to do about that at some point. But for the moment, I'm satisfied to
simply not break this functionality any worse by making it not a
process-global exit :-)
For gtkapp-based tools that will have to stop being a program-fatal
error, so I've turned it into a function called window_setup_error
(which I could in principle reuse for other problems in the long and
tortuous progress of new_session_window), and kept the original
handling in gtkmain.c's implementation of that function while gtkapp.c
does something more sensible with a message box.
Not all gtkwin-based tools use it. Only the ones with one session per
process, which parse a command line describing that session and might
reasonably want to report errors in that command line by writing to
standard error and exiting the program.
In other words, precisely the ones that link in gtkmain.c and not
gtkapp.c. So gtkmain.c is a more sensible place to put that
error-reporting function.
This was one of a handful of remaining places in gtkwin.c where exit()
is called incautiously. Of course, a failure to set up one SSH
connection should only be fatal to that connection, not the whole
process, so really we should be feeding into the connection_fatal
system.
This change requires me to break up the general cleanups in
delete_inst() into two halves: one runs when the error message box is
created, and cleans up the network connection and all the stuff
associated with it, and the other runs when the error message is
dismissed and the window can actually close.
I've also moved it out into gtkwin.c, because it seemed easier to do
the 'find existing instance of this dialog and raise it' dance there
than to split it across source files pointlessly.
Now the 'network prompt dialog' system has several 'slots', each
named for a particular class of
subsidiary dialog box that a session window can have at most one of,
and register_network_prompt_dialog has a more general name and takes
an enum-typed argument identifying a slot. This lets me avoid writing
a zillion annoyingly similar function pairs and corresponding snippets
of cleanup code in delete_inst.
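Sketch of the generalised form (the slot names are examples):

    enum DialogSlot {
        DIALOG_SLOT_RECONFIGURE,
        DIALOG_SLOT_NETWORK_PROMPT,
        DIALOG_SLOT_LIMIT
    };

    struct gui_data {
        /* ... */
        GtkWidget *dialogs[DIALOG_SLOT_LIMIT];  /* at most one per slot */
    };

    /* the renamed, more general registration function: */
    void register_dialog(struct gui_data *inst, enum DialogSlot slot,
                         GtkWidget *dialog);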
If you close a session window with an associated SSH back end, the
back end may call back to notify_remote_exit() from ssh_free(), which
queues a new top-level callback citing the inst structure we were
about to delete.
We could fix this by introducing a special 'moribund' flag which
inhibits notify_remote_exit from queueing a callback, but far easier
is to move the delete_callbacks_for_context() call to _after_ all
subsidiary things have been cleaned up, so that any last-minute
callbacks they might schedule will be promptly unscheduled again
before they do any damage.
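So the teardown ordering becomes, in sketch:

    static void delete_inst(struct gui_data *inst)
    {
        /* ... free the backend, terminal, dialogs and so on, any of
         * which might queue one last top-level callback citing inst ... */

        /* Only now, when nothing can schedule any more of them,
         * unschedule the stragglers: */
        delete_callbacks_for_context(inst);
        sfree(inst);
    }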
When I switch verify_ssh_host_key() and friends over to creating
non-modal message boxes and returning to the main loop, there will be
a risk that their parent window will need to close for some other
reason while the user hasn't answered the pending question yet. (E.g.
if the user presses the main session window's close button, which will
no longer be a prohibited UI action once the transient dialog is not
modal.)
At that point we need to get rid of the pending dialog box, both for
UI purposes (it would look silly and be confusing to leave it lying
around) and for memory management (if the user subsequently clicks OK
in such a dialog it would probably try to leave its result somewhere
stale).
So now there's a mechanism for gtkwin.c remembering what the current
'network prompt dialog' is, if any (in which category I intend to
include everything triggered from ssh.c's various reasons for asking
crypto-related questions), and cleaning it up when the struct gui_data
it belongs to goes away.
If a dialog box is destroyed by the program before the user has
pressed one of the result-delivering buttons - e.g. because the parent
window closes so the dialog is no longer relevant to anything anyway -
then dlgparam_destroy would never call the client code's provided
callback. That makes sense in terms of the callback wanting to _take
action_ based on the result of the dialog box, but it ignores the
possibility that the callback may simply need to free its own context
structure.
So now dlgparam_destroy always calls the client's callback, even if
the result it passes is negative (meaning 'the user never got round to
pressing any of the dialog-ending buttons'), and all the existing
client callbacks handle the negative-result case by doing nothing
except freeing any allocated memory they might have.
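So a typical client callback now has this shape (names invented):

    static void hostkey_dialog_callback(void *vctx, int result)
    {
        struct hostkey_ctx *ctx = (struct hostkey_ctx *)vctx;
        if (result >= 0) {
            /* ... act on the user's actual answer ... */
        }
        sfree(ctx);   /* freed on every path, even dialog destruction */
    }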
Now, in place of a variadic argument list with four parameters per
button and a terminating NULL, it takes a pointer to a struct which in
turn contains an (array,length) pair of small per-button structures.
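The new argument shape is along these lines (field names are
illustrative; lenof is the usual array-size macro):

    struct message_box_button {
        const char *title;  /* button label */
        char shortcut;      /* keyboard shortcut */
        int type;           /* e.g. +1 = default button, -1 = cancel */
        int value;          /* result delivered if this one is pressed */
    };

    struct message_box_buttons {
        const struct message_box_button *buttons;
        int nbuttons;
    };

    /* reusable layouts such as: */
    static const struct message_box_button button_array_yn[] = {
        {"Yes", 'y', +1, 1},
        {"No", 'n', -1, 0},
    };
    const struct message_box_buttons buttons_yn = {
        button_array_yn, lenof(button_array_yn),
    };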
In the process I've renamed the function from messagebox() to
message_box(). Partly that was just because it gave me a convenient
way to search the source for calls I hadn't converted yet, but also
I've thought for a while that that missing underscore didn't really
match the rest of my naming.
NFCI. Partly this minor refactor has the virtue that we can reuse the
more common button layouts without having to type them in at multiple
places in the code (and, indeed, I've provided buttons_yn and
buttons_ok for easy reuse, and could easily provide other things like
yesnocancel any time I need them). But mostly it's because I'm about
to split up message_box into multiple functions, and this saves me the
hassle of deciding which ones to make variadic and which to pass an
actual va_list to - particularly since messagebox() used to go over
its variadic argument list twice, which always makes delegating it to
another function that much more annoying.
Now every call to do_config_box is replaced with a call to
create_config_box, which returns immediately having constructed the
new GTK window object, and is passed a callback function which it will
arrange to be called when the dialog terminates (whether by OK or by
Cancel). That callback is now what triggers the construction of a
session window after 'Open' is pressed in the initial config box, or
the actual mid-session reconfiguration action after 'Apply' is pressed
in a Change Settings box.
We were already prepared to ignore the re-selection of 'Change
Settings' from the context menu of a window that already had a Change
Settings box open (and not accidentally create a second config box for
the same window); but now we do slightly better, by finding the
existing config box and un-minimising and raising it, in case the user
had forgotten it was there.
That's a useful featurelet, but not the main purpose of this change.
The main point, of course, is that the multi-window
GtkApplication-based front ends no longer do anything confusing to
the nesting of gtk_main() when config boxes are involved. Whether
you're changing the
settings of one (or more than one) of your already-running sessions,
preparing to start up a new PuTTY connection, or both at once, we stay
in the same top-level instance of gtk_main() and all sessions' top-
level callbacks continue to run sensibly.
This has been logically necessary in principle for ages, but we got
away without it because we just exited the program. But in the multi-
window GtkApplication front ends, we can't get away with that for
ever; we need to be able to free _one_ of our 'struct gui_data'
instances and everything dangling off it (or, at least, everything
that GTK's reference counting system doesn't clean up for us), without
also doing anything global to the process in which that gui_data is
contained.
ignore_sbar is a flag that we set while manually changing the
scrollbar settings, so that when those half-finished changes trigger
GTK event callbacks, we know to ignore them, and wait until we've
finished setting everything up before actually updating the window.
But somehow I had managed to leave the functions that actually _have
the effect_ (at least in GTK1) outside the pair of statements that set
and unset the ignore flag.
The effect was that compiling pterm for GTK1, starting it up, and
issuing a command like 'ls -l' that scrolls off the bottom of the
window would lead to the _top_ half of the ls output being visible,
and the scrollbar at the top of the scrollback rather than the bottom.
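The fix is simply to widen the flagged region so that it covers the
calls that take effect (a sketch, using the GTK1-era adjustment API):

    inst->ignore_sbar = TRUE;
    inst->sbar_adjust->lower = 0;
    inst->sbar_adjust->upper = total;
    inst->sbar_adjust->value = start;
    /* in GTK1 this emission is what actually moves the scrollbar,
     * so it must happen while the flag is still set: */
    gtk_adjustment_changed(inst->sbar_adjust);
    inst->ignore_sbar = FALSE;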
Apparently I haven't tested this compile mode in a while: I had a
couple of compile errors due to new code not properly #ifdeffed (the
true-colour mode has to be effectively disabled in the palette-based
GTK1 graphics model) and one for an unused static function
(get_monitor_geometry is only used in GTK2 and above, and with -Werror
that means I mustn't even _define_ it in GTK1).
With these changes, I still didn't get a clean compile unless I also
configured CFLAGS=-std=gnu89, due to the GTK1 headers having an
outdated set of ifdefs to figure out the compiler's semantics of
'inline'. (They seem to expect old-style gcc, which inconveniently
treats 'inline' and 'extern inline' more or less the opposite way
round from the version standardised by C99.)
ATTR_REVERSE was being handled in the front ends, and was causing the
foreground and background colours to be switched. (I'm not completely
sure why I made that design decision; it might be purely historical,
but then again, it might also be because reverse video is one effect
on the fg and bg colours that must still be performed even in unusual
frontend-specific situations like display-driven monochrome mode.)
This affected both explicit reverse video enabled using SGR 7, and
also the transient reverse video arising from mouse selection. Thanks
to Markus Gans for reporting the bug in the latter, which when I
investigated it turned out to affect the former as well.
I've done this on a 'where possible' basis: in Windows paletted mode
(in case anyone is still using an old enough graphics card to need
that!) I simply haven't bothered, and will completely ignore the dim
flag.
Markus Gans points out that some applications which (not at all
unreasonably) don't trust $TERM to tell them the full capabilities of
their terminal will use the sequence "OSC 4 ; nn ; ? BEL" to ask for
the colour-palette value in position nn, and they may not particularly
care _what_ the results are but they will use them to decide whether
the right number of colour palette entries even exist.
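For concreteness, the exchange looks like this (response format
following xterm's convention; the RGB values shown are just an
example):

    query:     ESC ] 4 ; 1 ; ? BEL
    response:  ESC ] 4 ; 1 ; rgb:cdcd/0000/0000 BEL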
Otherwise, moving the cursor (at least in active, filled-cell mode) on
to a true-coloured character cell causes it to vanish completely
because the cell's colours override the thing that differentiates the
cursor.
I'm not sure if any X11 monochrome visuals or Windows paletted display
modes are still around, but just in case they are, we shouldn't
attempt true colour on either kind of display.
This is a heavily rewritten version of a patch originally by Lorenz
Diener; it was tidied up somewhat by Christian Brabandt, and then
tidied up more by me. The basic idea is to add to the termchar
structure a pair of small structs encoding 24-bit RGB values, each
with a flag indicating whether it's turned on; if it is, it overrides
any other specification of fg or bg colour for that character cell.
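Concretely, something like this (a sketch; the real names may
differ):

    typedef struct {
        unsigned char enabled;  /* if 0, use the ordinary colour scheme */
        unsigned char r, g, b;  /* 24-bit RGB, honoured when enabled */
    } optionalrgb;

    typedef struct termchar {
        /* ... existing fields ... */
        optionalrgb fg_truecolour, bg_truecolour;
    } termchar;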
I've added a test line to colours.txt containing a few example colours
from /usr/share/X11/rgb.txt. In fact it makes quite a good demo to run
the whole of rgb.txt through this treatment, with a command such as
    perl -pe 's!^\s*(\d+)\s+(\d+)\s+(\d+).*$!\e[38;2;$1;$2;$3m$&\e[m!' rgb.txt