We have to have a CA cert first, because the host will
start using the client cert as soon as it's available,
but it's not functional without a CA cert.
Also removed some unnecessary complexity from wait_for_cert --
the connection is now always recycled, which is much simpler.
Signed-off-by: Luke Kanies <luke@madstop.com>
since that method is deprecated.
Conflicts:
CHANGELOG
bin/puppetca
lib/puppet/file_serving/fileset.rb
lib/puppet/network/xmlrpc/client.rb
lib/puppet/type/file/selcontext.rb
spec/unit/file_serving/metadata.rb
spec/unit/type/file.rb
which causes puppet to produce different exit codes depending
on whether there were changes or failures in the transaction.
Signed-off-by: Luke Kanies <luke@madstop.com>
This merges in the new fileserving code -- we're now using
REST to do fileserving, rather than xmlrpc.
Conflicts:
lib/puppet/parameter.rb
lib/puppet/type/file.rb
spec/unit/type/file.rb
The problem was that the mechanism I was using for
passing the node to the compiler was conflicting with
the Indirector::Request's method of handling node
authentication.
Signed-off-by: Luke Kanies <luke@madstop.com>
Added environment awareness to --configprint
Pulled the logic for --configprint, --genconfig, and --genmanifest out of puppet.rb
Put the logic in lib/puppet/util/settings.rb and refactored it a bit
Added specs for the behavior
Reformatted the whole spec file to use nested describe blocks
Added the new method to the executables
The old behavior should be preserved, except that the environment is now used
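As a rough sketch of how an executable might call into the relocated logic: the print_configs?/print_configs names below are my assumption about the new method on the settings object, not something named in this message.

    # Hypothetical wiring in an executable; the method names on the
    # settings object are assumptions, not taken from this commit.
    require 'puppet'

    # Let the settings object handle --configprint (and --genconfig /
    # --genmanifest), honouring the currently selected environment.
    if Puppet.settings.print_configs?
      exit(Puppet.settings.print_configs ? 0 : 1)
    end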
Also added the fixes to make the certhandler tests pass
even when certs exist; I'll deal with the conflict later.
Conflicts:
CHANGELOG
bin/puppetd
lib/puppet/network/http/handler.rb
lib/puppet/network/http/mongrel/rest.rb
spec/integration/indirector/rest.rb
spec/integration/network/server/mongrel.rb
spec/integration/network/server/webrick.rb
spec/unit/network/http/webrick.rb
...as far as I can tell. The client, however, is broken,
since it used the old http_pool/ssl_support stuff, which
no longer works.
I have to port puppetd over to using the new ssl stuff,
then I'll at least be able to verify that the master can
still speak xmlrpc.
The code is much cleaner, and it seems to be mostly
functional, but we have to pick a strategy for signing
the host's certificate on first startup. Also, I haven't
actually done end-to-end testing yet, which needs the certs
working first.
This class provides all of the semantics from puppetca,
and appears to entirely duplicate the behaviour of the existing
executable, with basically all of the code in a library
file, instead of the executable.
As such, I've deleted the test for the executable. We should have
one, but it's not nearly as important.
certificate, and --verify, which uses the external openssl command to verify
the cert against the CA cert (I could not find an option in Ruby's
built-in OpenSSL bindings to do this).
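The --verify approach amounts to shelling out to the openssl CLI, roughly like this (the certificate paths are placeholders):

    # Illustrative only: verify a host cert against the CA cert by
    # shelling out to the external openssl command.
    ca_cert   = "/etc/puppet/ssl/certs/ca.pem"
    host_cert = "/etc/puppet/ssl/certs/myhost.pem"

    output = %x{openssl verify -CAfile #{ca_cert} #{host_cert}}

    # openssl prints "<path>: OK" on success, so check the output rather
    # than relying solely on the exit code.
    if output =~ /: OK/
      puts "#{host_cert} verifies against #{ca_cert}"
    else
      puts "verification failed: #{output}"
    end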
a central module responsible for managing the http pool
(Puppet::Network::HttpPool), and it also handles
setting certificate information. This gets rid of
what were otherwise long chains of method calls,
and it makes the code paths much clearer.
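A sketch of what a caller looks like once the pool is centralized; the http_instance method name is my assumption about HttpPool's interface, and the host, port, and path are placeholders.

    # Hypothetical caller: ask the central pool for a connection instead
    # of building Net::HTTP objects (and wiring up certificates) by hand.
    require 'puppet/network/http_pool'

    http = Puppet::Network::HttpPool.http_instance("puppet.example.com", 8140)
    http.start do |conn|
      response = conn.get("/")   # the path is purely illustrative
      puts response.code
    end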
Previously, for example, the configuration terminus that was a
subclass of 'code' would have been stored at
lib/puppet/indirector/code/configuration and would have had
to be named 'configuration'. Now, the subclass
can be named however the author prefers, and it must be stored
at lib/puppet/indirector/configuration/<name>.rb, where <name>
is the name you've chosen for the terminus type. The name only
matters insomuch as it is used to load the file from disk and
find the appropriate class when asked.
The additional restriction is that the class constant for the terminus
type must have its name as the last word, and the indirection must
be the second to last word. Thus, in our example, we can choose
any class constant that ends with Configuration::Code; given that
there's only one Configuration class at this point, it makes the
most sense to define the class as Puppet::Node::Configuration::Code.
This is somewhat awkward, because of the class's location on disk,
but the only other real option is to autogenerate a
Puppet::Indirector::Configuration class constant, which is, I think,
uglier.
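To make the convention concrete, here is a sketch of a terminus file under the new layout; the require paths and the Puppet::Indirector::Code base class are my assumptions about the surrounding code, not something spelled out above.

    # lib/puppet/indirector/configuration/code.rb -- hypothetical sketch
    require 'puppet/node/configuration'
    require 'puppet/indirector/code'

    # Terminus type "code" for the "configuration" indirection: the file
    # lives under the indirection's directory, and the class constant
    # ends in Configuration::Code, as described above.
    class Puppet::Node::Configuration::Code < Puppet::Indirector::Code
      # The signature is illustrative; the terminus just has to answer find().
      def find(request)
        # ... compile and return the configuration for the requested node ...
      end
    end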
This is the first real pass towards using caching. The `puppet`
executable actually uses the indirection work, instead of
handlers and such (and man! is it cleaner).
Most of this work was a result of trying to get the client-side
story working, with correct yaml caching of configurations, which
means this commit also covers converting configurations to yaml,
which was a much bigger PITA than it needed to be.
I still need to write integration tests, and I also need to cover
the server-side story of a normal configuration retrieval.
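For context, the round trip the yaml cache has to survive looks roughly like this (a plain hash stands in for the real configuration object, and the cache path is a placeholder):

    require 'yaml'
    require 'fileutils'

    # Stand-in for the configuration object fetched from the server.
    configuration = { "name" => "myhost.example.com", "resources" => [] }

    # Placeholder cache location.
    cache_file = "/tmp/puppet_yaml_cache/myhost.yaml"
    FileUtils.mkdir_p(File.dirname(cache_file))

    # Write the cache, then read it back; the real configuration class has
    # to survive exactly this round trip, which is what made to_yaml painful.
    File.open(cache_file, "w") { |f| f.print YAML.dump(configuration) }
    cached = YAML.load_file(cache_file)
    puts cached["name"]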
instead of a manifest, and removing all of the ambiguity
around whether an interpreter gets its own file specified
or uses the central setting.
Most of the changes are around fixing existing tests to use this new system.
to work. As a result, it involves a lot of integration-level
testing, and a lot of small design changes to make the code
actually work.
In particular, indirections can now have default termini,
so that configurations and facts default to their code terminus.
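A sketch of what declaring a default terminus looks like; the :terminus_class option name is an assumption about the indirector API, not quoted from this commit.

    # Hypothetical model class using the indirector with a default terminus.
    require 'puppet/node'
    require 'puppet/indirector'

    class Puppet::Node::Configuration
      extend Puppet::Indirector

      # Unless overridden, configuration lookups go through the "code"
      # terminus, i.e. they are compiled in-process.
      indirects :configuration, :terminus_class => :code
    end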
Also, I've removed the ability to manually control whether
AST nodes are used. I might need to add it back in later,
but if so it will be in the form of a global setting,
rather than the previous system of passing it through 10 different
classes. Instead, the parser detects whether there are AST nodes
defined and requires them if so or ignores them if not.
About 75 tests are still failing in the main set of tests,
but it's going to be a long slog to get them working --
there are significant design issues around them. Most of the
failures come from tests trying to emulate both the client and
server sides of a connection; the two sides would normally have
different fact termini, but here they must share one, simply
because they're in the same process and the terminus is global.
The next step, then, is to figure that process out, thus finding a way
to make this all work.
'Puppet::Util::Settings'. This is to clear up
confusion caused by the fact that we now have a
'Configuration' class that models host configurations
(or any set of resources) as a "configuration".
I've gone too far down the rabbit hole to turn back now, but the
code is clearly getting more centralized around the Configuration
class, which is the goal.
Things are currently a bit muddy between recursion, dynamic resource
generation, transactions, and the configuration, and I don't expect
to be able to clear it up much until we rewrite all of the tests
for the Transaction class, since that is when we'll actually be
setting its behaviour.
At this point, Files (which are currently the only resources that
generate other resources) are responsible for adding their edges
to the relationship graph. This means they know more about how the
relationship graph works than I would like, but it'll have to do for now.
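Roughly the shape of that responsibility; every name here (add_generated_children, generated_children, add_vertex, add_edge) is hypothetical, a guess at the interface rather than anything this message specifies.

    # Entirely hypothetical names: a recursive file resource wiring the
    # children it generates straight into the relationship graph, instead
    # of leaving that bookkeeping to the transaction.
    def add_generated_children(relationship_graph)
      generated_children.each do |child|
        relationship_graph.add_vertex(child)
        # The parent file should be managed before the children it generated.
        relationship_graph.add_edge(self, child)
      end
    end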
There are still failing tests, but files seem to work again. Now to
go through the rest of the tests and make them work.