In the new setup, all per-interface DOM binding files are exported into
mozilla/dom, as are the general files that are not specific to any one
interface.
In terms of namespaces, most things now live in mozilla::dom. Each interface
Foo that has generated code gets a mozilla::dom::FooBinding namespace for that
generated code (and possibly a mozilla::dom::FooBinding_workers namespace if
there is separate codegen for workers).
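As a rough sketch of the layout (the interface name and the contents of the
namespaces are illustrative, not actual generated output):

  namespace mozilla {
  namespace dom {

  class Foo;                 // hand-written concrete class keeps the IDL name

  namespace FooBinding {
    // generated binding code for interface Foo (Wrap(), prototype setup, ...)
  } // namespace FooBinding

  namespace FooBinding_workers {
    // worker-specific generated code, when separate codegen exists
  } // namespace FooBinding_workers

  } // namespace dom
  } // namespace mozilla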
IDL enums are a bit weird: since the name of the enum and the names of its
entries all end up in the same namespace, we nevertheless generate a C++
namespace named after the IDL enum type with "Values" appended to it, with a
::valuelist enum inside for the actual C++ values. We then typedef
EnumFooValues::valuelist to EnumFoo. That makes it slightly more awkward to
refer to the values, but means that values from different enums don't collide
with each other.
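For illustration (the enum values A and B are made up; the EnumFoo and
EnumFooValues names follow the scheme described above):

  namespace mozilla {
  namespace dom {

  namespace EnumFooValues {
    enum valuelist {
      A,
      B
    };
  } // namespace EnumFooValues

  typedef EnumFooValues::valuelist EnumFoo;

  } // namespace dom
  } // namespace mozilla

Consumers spell the values EnumFooValues::A and so on, while the type itself
is just EnumFoo.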
The enums with the proto and constructor IDs in them now live under the
mozilla::dom::prototypes and mozilla::dom::constructors namespaces respectively.
Again, this lets us deal sanely with the whole "enum value names are flattened
into the namespace the enum is in" deal.
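A hedged sketch of the idea (the enum names and entries below are assumptions,
not the exact generated identifiers):

  namespace mozilla {
  namespace dom {

  namespace prototypes {
    enum ID {
      Foo,    // one entry per interface with a generated prototype
      Count
    };
  } // namespace prototypes

  namespace constructors {
    enum ID {
      Foo,    // same interface name, but no collision with prototypes::Foo
      Count
    };
  } // namespace constructors

  } // namespace dom
  } // namespace mozilla

Because a C++03 enum's entries land in the enclosing namespace, code refers to
these as prototypes::Foo and constructors::Foo, which never collide.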
The main benefit of this setup (and the reason "Binding" got appended to the
per-interface namespaces) is that this way "using mozilla::dom" should Just
Work for consumers and still allow C++ code to sanely use the IDL interface
names for concrete classes, which is fairly desirable.
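A sketch of the consumer-side result (the class, method, and Wrap signature
here are assumptions for illustration only):

  using namespace mozilla::dom;

  void
  UseFoo(JSContext* aCx, JSObject* aScope, Foo* aFoo)
  {
    // The concrete C++ class keeps the plain IDL name...
    aFoo->DoSomething();

    // ...while the generated code is reached through the Binding namespace.
    FooBinding::Wrap(aCx, aScope, aFoo);
  }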
--HG--
rename : dom/bindings/Utils.cpp => dom/bindings/BindingUtils.cpp
rename : dom/bindings/Utils.h => dom/bindings/BindingUtils.h
The current situation seems incorrect, especially given the behavior of CrossOriginWrapper and XrayProxy. Currently it doesn't matter, but it probably will in the future.
This isn't an issue right now, since it can't ever happen outside of sandboxes, which content can't use. But if content could use sandboxes, it could end up with a pure CrossCompartmentWrapper to a Location object, which would be bad.
I'm adding asserts about when we do and don't have a Location object behind the wrapper, and this case was hitting them. Given how this all works, what we do here doesn't matter much. On the one hand, statically using a restrictive policy is slightly more defense-in-depth. On the other hand, if this stuff is broken we're screwed in much more serious ways than content reading chrome locations, and using a consistent wrapper scheme lets us make stronger asserts and assumptions.
I opted for the stronger assumptions and more understandable security code. If Blake feels strongly, though, I could go the other way and sprinkle '|| isChrome(obj)' throughout the asserts.
Currently the GC finalizes on the background thread only those objects whose
JSClass::finalize is null. This means that any object using JS_FinalizeStub
as its finalizer is excluded from background finalization.
To fix this, the patch removes JS_FinalizeStub, replacing it with NULL in all
cases where the class has no custom finalizer. For style consistency the patch
also removes the usage of JSCLASS_NO_OPTIONAL_MEMBERS in the static
declarations, since the compiler fills the missing fields with null in any
case.
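A hedged sketch of the kind of declaration this affects (the class name is
made up; the member layout follows the JSClass of that era):

  static JSClass sExampleClass = {
    "Example",
    0,                                        // flags
    JS_PropertyStub,  JS_PropertyStub,        // addProperty, delProperty
    JS_PropertyStub,  JS_StrictPropertyStub,  // getProperty, setProperty
    JS_EnumerateStub, JS_ResolveStub,
    JS_ConvertStub,
    NULL                                      // finalize: was JS_FinalizeStub
    // Remaining optional members are simply omitted; the compiler zero-fills
    // them, so JSCLASS_NO_OPTIONAL_MEMBERS is unnecessary.
  };

With a null finalizer the GC is free to finalize instances of this class on
the background thread.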
In just 2 cases where JSPrincipals::codebase is used, it can be reconstructed from the values stored in the associated nsJSPrincipals. In addition, the patch makes nsJSPrincipals inherit from both nsIPrincipal and JSPrincipals, which allows static_cast to convert between nsIPrincipal and JSPrincipals pointers and lets us drop many cases of manual JSPrincipals reference counting.
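A minimal sketch of the resulting shape, assuming the inheritance described
above (the member list is elided and the helper function name is
hypothetical):

  class nsJSPrincipals : public nsIPrincipal, public JSPrincipals
  {
    // ...
  };

  // With both bases in place, converting is a plain static_cast rather than
  // going through manual JSPrincipals reference-counting helpers.
  // (Assumes aPrincipal really is an nsJSPrincipals.)
  static JSPrincipals*
  GetJSPrincipals(nsIPrincipal* aPrincipal)
  {
    return static_cast<nsJSPrincipals*>(aPrincipal);
  }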
This removes the inclusion from xpcprivate.h, and adds the include to XPConnect
files that still need it, along with notes to clarify what these files need
from the include. These notes will be removed while fixing bug 677079.