diff --git a/_content/doc/articles/go_command.html b/_content/doc/articles/go_command.html new file mode 100644 index 00000000..5b6fd4d2 --- /dev/null +++ b/_content/doc/articles/go_command.html @@ -0,0 +1,254 @@ + + +
The Go distribution includes a command, named
+"go
", that
+automates the downloading, building, installation, and testing of Go packages
+and commands. This document talks about why we wrote a new command, what it
+is, what it's not, and how to use it.
You might have seen early Go talks in which Rob Pike jokes that the idea +for Go arose while waiting for a large Google server to compile. That +really was the motivation for Go: to build a language that worked well +for building the large software that Google writes and runs. It was +clear from the start that such a language must provide a way to +express dependencies between code libraries clearly, hence the package +grouping and the explicit import blocks. It was also clear from the +start that you might want arbitrary syntax for describing the code +being imported; this is why import paths are string literals.
+ +An explicit goal for Go from the beginning was to be able to build Go +code using only the information found in the source itself, not +needing to write a makefile or one of the many modern replacements for +makefiles. If Go needed a configuration file to explain how to build +your program, then Go would have failed.
+ +At first, there was no Go compiler, and the initial development +focused on building one and then building libraries for it. For +expedience, we postponed the automation of building Go code by using +make and writing makefiles. When compiling a single package involved +multiple invocations of the Go compiler, we even used a program to +write the makefiles for us. You can find it if you dig through the +repository history.
+ +The purpose of the new go command is our return to this ideal, that Go +programs should compile without configuration or additional effort on +the part of the developer beyond writing the necessary import +statements.
+ +The way to achieve the simplicity of a configuration-free system is to
+establish conventions. The system works only to the extent that those conventions
+are followed. When we first launched Go, many people published packages that
+had to be installed in certain places, under certain names, using certain build
+tools, in order to be used. That's understandable: that's the way it works in
+most other languages. Over the last few years we consistently reminded people
+about the goinstall
command
+(now replaced by go get
)
+and its conventions: first, that the import path is derived in a known way from
+the URL of the source code; second, that the place to store the sources in
+the local file system is derived in a known way from the import path; third,
+that each directory in a source tree corresponds to a single package; and
+fourth, that the package is built using only information in the source code.
+Today, the vast majority of packages follow these conventions.
+The Go ecosystem is simpler and more powerful as a result.
We received many requests to allow a makefile in a package directory to +provide just a little extra configuration beyond what's in the source code. +But that would have introduced new rules. Because we did not accede to such +requests, we were able to write the go command and eliminate our use of make +or any other build system.
+ +It is important to understand that the go command is not a general +build tool. It cannot be configured and it does not attempt to build +anything but Go packages. These are important simplifying +assumptions: they simplify not only the implementation but also, more +important, the use of the tool itself.
+ +The go
command requires that code adheres to a few key,
+well-established conventions.
First, the import path is derived in a known way from the URL of the
+source code. For Bitbucket, GitHub, Google Code, and Launchpad, the
+root directory of the repository is identified by the repository's
+main URL, without the http://
prefix. Subdirectories are named by
+adding to that path.
+For example, the Go example programs are obtained by running
+git clone https://github.com/golang/example ++ +
and thus the import path for the root directory of that repository is
+"github.com/golang/example
".
+The stringutil
+package is stored in a subdirectory, so its import path is
+"github.com/golang/example/stringutil
".
These paths are on the long side, but in exchange we get an +automatically managed name space for import paths and the ability for +a tool like the go command to look at an unfamiliar import path and +deduce where to obtain the source code.
+ +Second, the place to store sources in the local file system is derived
+in a known way from the import path, specifically
+$GOPATH/src/<import-path>
.
+If unset, $GOPATH
defaults to a subdirectory
+named go
in the user's home directory.
+If $GOPATH
is set to a list of paths, the go command tries
+<dir>/src/<import-path>
for each of the directories in
+that list.
+
Each of those trees contains, by convention, a top-level directory named
+"bin
", for holding compiled executables, and a top-level directory
+named "pkg
", for holding compiled packages that can be imported,
+and the "src
" directory, for holding package source files.
+Imposing this structure lets us keep each of these directory trees
+self-contained: the compiled form and the sources are always near each
+other.
These naming conventions also let us work in the reverse direction, +from a directory name to its import path. This mapping is important +for many of the go command's subcommands, as we'll see below.
+ +Third, each directory in a source tree corresponds to a single +package. By restricting a directory to a single package, we don't have +to create hybrid import paths that specify first the directory and +then the package within that directory. Also, most file management +tools and UIs work on directories as fundamental units. Tying the +fundamental Go unit—the package—to file system structure means +that file system tools become Go package tools. Copying, moving, or +deleting a package corresponds to copying, moving, or deleting a +directory.
+ +Fourth, each package is built using only the information present in +the source files. This makes it much more likely that the tool will +be able to adapt to changing build environments and conditions. For +example, if we allowed extra configuration such as compiler flags or +command line recipes, then that configuration would need to be updated +each time the build tools changed; it would also be inherently tied +to the use of a specific toolchain.
+ +Finally, a quick tour of how to use the go command.
+As mentioned above, the default $GOPATH
on Unix is $HOME/go
.
+We'll store our programs there.
+To use a different location, you can set $GOPATH
;
+see How to Write Go Code for details.
+
+
We first add some source code. Suppose we want to use
+the indexing library from the codesearch project along with a left-leaning
+red-black tree. We can install both with the "go get
"
+subcommand:
+$ go get github.com/google/codesearch/index +$ go get github.com/petar/GoLLRB/llrb +$ ++ +
Both of these projects are now downloaded and installed into $HOME/go
,
+which contains the two directories
+src/github.com/google/codesearch/index/
and
+src/github.com/petar/GoLLRB/llrb/
, along with the compiled
+packages (in pkg/
) for those libraries and their dependencies.
Because we used a version control system (Git) to check
+out the sources, the source tree also contains the other files in the
+corresponding repositories, such as related packages. The "go list
"
+subcommand lists the import paths corresponding to its arguments, and
+the pattern "./...
" means start in the current directory
+("./
") and find all packages below that directory
+("...
"):
+$ cd $HOME/go/src +$ go list ./... +github.com/google/codesearch/cmd/cgrep +github.com/google/codesearch/cmd/cindex +github.com/google/codesearch/cmd/csearch +github.com/google/codesearch/index +github.com/google/codesearch/regexp +github.com/google/codesearch/sparse +github.com/petar/GoLLRB/example +github.com/petar/GoLLRB/llrb +$ ++ +
We can also test those packages:
+ ++$ go test ./... +? github.com/google/codesearch/cmd/cgrep [no test files] +? github.com/google/codesearch/cmd/cindex [no test files] +? github.com/google/codesearch/cmd/csearch [no test files] +ok github.com/google/codesearch/index 0.203s +ok github.com/google/codesearch/regexp 0.017s +? github.com/google/codesearch/sparse [no test files] +? github.com/petar/GoLLRB/example [no test files] +ok github.com/petar/GoLLRB/llrb 0.231s +$ ++ +
If a go subcommand is invoked with no paths listed, it operates on the +current directory:
+ ++$ cd github.com/google/codesearch/regexp +$ go list +github.com/google/codesearch/regexp +$ go test -v +=== RUN TestNstateEnc +--- PASS: TestNstateEnc (0.00s) +=== RUN TestMatch +--- PASS: TestMatch (0.00s) +=== RUN TestGrep +--- PASS: TestGrep (0.00s) +PASS +ok github.com/google/codesearch/regexp 0.018s +$ go install +$ ++ +
That "go install
" subcommand installs the latest copy of the
+package into the pkg directory. Because the go command can analyze the
+dependency graph, "go install
" also installs any packages that
+this package imports but that are out of date, recursively.
Notice that "go install
" was able to determine the name of the
+import path for the package in the current directory, because of the convention
+for directory naming. It would be a little more convenient if we could pick
+the name of the directory where we kept source code, and we probably wouldn't
+pick such a long name, but that ability would require additional configuration
+and complexity in the tool. Typing an extra directory name or two is a small
+price to pay for the increased simplicity and power.
As mentioned above, the go command is not a general-purpose build
+tool.
+In particular, it does not have any facility for generating Go
+source files during a build, although it does provide
+go
+generate
,
+which can automate the creation of Go files before the build.
+For more advanced build setups, you may need to write a
+makefile (or a configuration file for the build tool of your choice)
+to run whatever tool creates the Go files and then check those generated source files
+into your repository. This is more work for you, the package author,
+but it is significantly less work for your users, who can use
+"go get
" without needing to obtain and build
+any additional tools.
For more information, read How to Write Go Code +and see the go command documentation.
diff --git a/_content/doc/articles/index.html b/_content/doc/articles/index.html new file mode 100644 index 00000000..9ddd6697 --- /dev/null +++ b/_content/doc/articles/index.html @@ -0,0 +1,8 @@ + + ++See the Documents page and the +Blog index for a complete list of Go articles. +
diff --git a/_content/doc/articles/race_detector.html b/_content/doc/articles/race_detector.html new file mode 100644 index 00000000..09188c15 --- /dev/null +++ b/_content/doc/articles/race_detector.html @@ -0,0 +1,440 @@ + + ++Data races are among the most common and hardest to debug types of bugs in concurrent systems. +A data race occurs when two goroutines access the same variable concurrently and at least one of the accesses is a write. +See the The Go Memory Model for details. +
+ ++Here is an example of a data race that can lead to crashes and memory corruption: +
+ ++func main() { + c := make(chan bool) + m := make(map[string]string) + go func() { + m["1"] = "a" // First conflicting access. + c <- true + }() + m["2"] = "b" // Second conflicting access. + <-c + for k, v := range m { + fmt.Println(k, v) + } +} ++ +
+To help diagnose such bugs, Go includes a built-in data race detector.
+To use it, add the -race
flag to the go command:
+
+$ go test -race mypkg // to test the package +$ go run -race mysrc.go // to run the source file +$ go build -race mycmd // to build the command +$ go install -race mypkg // to install the package ++ +
+When the race detector finds a data race in the program, it prints a report. +The report contains stack traces for conflicting accesses, as well as stacks where the involved goroutines were created. +Here is an example: +
+ ++WARNING: DATA RACE +Read by goroutine 185: + net.(*pollServer).AddFD() + src/net/fd_unix.go:89 +0x398 + net.(*pollServer).WaitWrite() + src/net/fd_unix.go:247 +0x45 + net.(*netFD).Write() + src/net/fd_unix.go:540 +0x4d4 + net.(*conn).Write() + src/net/net.go:129 +0x101 + net.func·060() + src/net/timeout_test.go:603 +0xaf + +Previous write by goroutine 184: + net.setWriteDeadline() + src/net/sockopt_posix.go:135 +0xdf + net.setDeadline() + src/net/sockopt_posix.go:144 +0x9c + net.(*conn).SetDeadline() + src/net/net.go:161 +0xe3 + net.func·061() + src/net/timeout_test.go:616 +0x3ed + +Goroutine 185 (running) created at: + net.func·061() + src/net/timeout_test.go:609 +0x288 + +Goroutine 184 (running) created at: + net.TestProlongTimeout() + src/net/timeout_test.go:618 +0x298 + testing.tRunner() + src/testing/testing.go:301 +0xe8 ++ +
+The GORACE
environment variable sets race detector options.
+The format is:
+
+GORACE="option1=val1 option2=val2" ++ +
+The options are: +
+ +log_path
(default stderr
): The race detector writes
+its report to a file named log_path.pid
.
+The special names stdout
+and stderr
cause reports to be written to standard output and
+standard error, respectively.
+exitcode
(default 66
): The exit status to use when
+exiting after a detected race.
+strip_path_prefix
(default ""
): Strip this prefix
+from all reported file paths, to make reports more concise.
+history_size
(default 1
): The per-goroutine memory
+access history is 32K * 2**history_size elements
.
+Increasing this value can avoid a "failed to restore the stack" error in reports, at the
+cost of increased memory usage.
+halt_on_error
(default 0
): Controls whether the program
+exits after reporting the first data race.
+atexit_sleep_ms
(default 1000
): Number of milliseconds
+to sleep in the main goroutine before exiting.
++Example: +
+ ++$ GORACE="log_path=/tmp/race/report strip_path_prefix=/my/go/sources/" go test -race ++ +
+When you build with -race
flag, the go
command defines additional
+build tag race
.
+You can use the tag to exclude some code and tests when running the race detector.
+Some examples:
+
+// +build !race + +package foo + +// The test contains a data race. See issue 123. +func TestFoo(t *testing.T) { + // ... +} + +// The test fails under the race detector due to timeouts. +func TestBar(t *testing.T) { + // ... +} + +// The test takes too long under the race detector. +func TestBaz(t *testing.T) { + // ... +} ++ +
+To start, run your tests using the race detector (go test -race
).
+The race detector only finds races that happen at runtime, so it can't find
+races in code paths that are not executed.
+If your tests have incomplete coverage,
+you may find more races by running a binary built with -race
under a realistic
+workload.
+
+Here are some typical data races. All of them can be detected with the race detector. +
+ ++func main() { + var wg sync.WaitGroup + wg.Add(5) + for i := 0; i < 5; i++ { + go func() { + fmt.Println(i) // Not the 'i' you are looking for. + wg.Done() + }() + } + wg.Wait() +} ++ +
+The variable i
in the function literal is the same variable used by the loop, so
+the read in the goroutine races with the loop increment.
+(This program typically prints 55555, not 01234.)
+The program can be fixed by making a copy of the variable:
+
+func main() { + var wg sync.WaitGroup + wg.Add(5) + for i := 0; i < 5; i++ { + go func(j int) { + fmt.Println(j) // Good. Read local copy of the loop counter. + wg.Done() + }(i) + } + wg.Wait() +} ++ +
+// ParallelWrite writes data to file1 and file2, returns the errors. +func ParallelWrite(data []byte) chan error { + res := make(chan error, 2) + f1, err := os.Create("file1") + if err != nil { + res <- err + } else { + go func() { + // This err is shared with the main goroutine, + // so the write races with the write below. + _, err = f1.Write(data) + res <- err + f1.Close() + }() + } + f2, err := os.Create("file2") // The second conflicting write to err. + if err != nil { + res <- err + } else { + go func() { + _, err = f2.Write(data) + res <- err + f2.Close() + }() + } + return res +} ++ +
+The fix is to introduce new variables in the goroutines (note the use of :=
):
+
+ ... + _, err := f1.Write(data) + ... + _, err := f2.Write(data) + ... ++ +
+If the following code is called from several goroutines, it leads to races on the service
map.
+Concurrent reads and writes of the same map are not safe:
+
+var service map[string]net.Addr + +func RegisterService(name string, addr net.Addr) { + service[name] = addr +} + +func LookupService(name string) net.Addr { + return service[name] +} ++ +
+To make the code safe, protect the accesses with a mutex: +
+ ++var ( + service map[string]net.Addr + serviceMu sync.Mutex +) + +func RegisterService(name string, addr net.Addr) { + serviceMu.Lock() + defer serviceMu.Unlock() + service[name] = addr +} + +func LookupService(name string) net.Addr { + serviceMu.Lock() + defer serviceMu.Unlock() + return service[name] +} ++ +
+Data races can happen on variables of primitive types as well (bool
, int
, int64
, etc.),
+as in this example:
+
+type Watchdog struct{ last int64 } + +func (w *Watchdog) KeepAlive() { + w.last = time.Now().UnixNano() // First conflicting access. +} + +func (w *Watchdog) Start() { + go func() { + for { + time.Sleep(time.Second) + // Second conflicting access. + if w.last < time.Now().Add(-10*time.Second).UnixNano() { + fmt.Println("No keepalives for 10 seconds. Dying.") + os.Exit(1) + } + } + }() +} ++ +
+Even such "innocent" data races can lead to hard-to-debug problems caused by +non-atomicity of the memory accesses, +interference with compiler optimizations, +or reordering issues accessing processor memory. +
+ +
+A typical fix for this race is to use a channel or a mutex.
+To preserve the lock-free behavior, one can also use the
+sync/atomic
package.
+
+type Watchdog struct{ last int64 } + +func (w *Watchdog) KeepAlive() { + atomic.StoreInt64(&w.last, time.Now().UnixNano()) +} + +func (w *Watchdog) Start() { + go func() { + for { + time.Sleep(time.Second) + if atomic.LoadInt64(&w.last) < time.Now().Add(-10*time.Second).UnixNano() { + fmt.Println("No keepalives for 10 seconds. Dying.") + os.Exit(1) + } + } + }() +} ++ +
+As this example demonstrates, unsynchronized send and close operations +on the same channel can also be a race condition: +
+ ++c := make(chan struct{}) // or buffered channel + +// The race detector cannot derive the happens before relation +// for the following send and close operations. These two operations +// are unsynchronized and happen concurrently. +go func() { c <- struct{}{} }() +close(c) ++ +
+According to the Go memory model, a send on a channel happens before +the corresponding receive from that channel completes. To synchronize +send and close operations, use a receive operation that guarantees +the send is done before the close: +
+ ++c := make(chan struct{}) // or buffered channel + +go func() { c <- struct{}{} }() +<-c +close(c) ++ +
+ The race detector runs on
+ linux/amd64
, linux/ppc64le
,
+ linux/arm64
, freebsd/amd64
,
+ netbsd/amd64
, darwin/amd64
,
+ darwin/arm64
, and windows/amd64
.
+
+The cost of race detection varies by program, but for a typical program, memory +usage may increase by 5-10x and execution time by 2-20x. +
+ +
+The race detector currently allocates an extra 8 bytes per defer
+and recover
statement. Those extra allocations are not recovered until the goroutine
+exits. This means that if you have a long-running goroutine that is
+periodically issuing defer
and recover
calls,
+the program memory usage may grow without bound. These memory allocations
+will not show up in the output of runtime.ReadMemStats
or
+runtime/pprof
.
+
+Covered in this tutorial: +
+net/http
package to build web applications
+html/template
package to process HTML templatesregexp
package to validate user input+Assumed knowledge: +
+
+At present, you need to have a FreeBSD, Linux, OS X, or Windows machine to run Go.
+We will use $
to represent the command prompt.
+
+Install Go (see the Installation Instructions). +
+ +
+Make a new directory for this tutorial inside your GOPATH
and cd to it:
+
+$ mkdir gowiki +$ cd gowiki ++ +
+Create a file named wiki.go
, open it in your favorite editor, and
+add the following lines:
+
+package main + +import ( + "fmt" + "io/ioutil" +) ++ +
+We import the fmt
and ioutil
packages from the Go
+standard library. Later, as we implement additional functionality, we will
+add more packages to this import
declaration.
+
+Let's start by defining the data structures. A wiki consists of a series of
+interconnected pages, each of which has a title and a body (the page content).
+Here, we define Page
as a struct with two fields representing
+the title and body.
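+A minimal sketch of such a struct (the tutorial's own listing lives in wiki.go):
+
+type Page struct {
+    Title string
+    Body  []byte
+}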
+
+The type []byte
means "a byte
slice".
+(See Slices: usage and
+internals for more on slices.)
+The Body
element is a []byte
rather than
+string
because that is the type expected by the io
+libraries we will use, as you'll see below.
+
+The Page
struct describes how page data will be stored in memory.
+But what about persistent storage? We can address that by creating a
+save
method on Page
:
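+A sketch of such a method, assuming the Title is used as the file name (as explained below):
+
+func (p *Page) save() error {
+    filename := p.Title + ".txt"
+    return ioutil.WriteFile(filename, p.Body, 0600)
+}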
+
+This method's signature reads: "This is a method named save
that
+takes as its receiver p
, a pointer to Page
. It takes
+no parameters, and returns a value of type error
."
+
+This method will save the Page
's Body
to a text
+file. For simplicity, we will use the Title
as the file name.
+
+The save
method returns an error
value because
+that is the return type of WriteFile
(a standard library function
+that writes a byte slice to a file). The save
method returns the
+error value, to let the application handle it should anything go wrong while
+writing the file. If all goes well, Page.save()
will return
+nil
(the zero-value for pointers, interfaces, and some other
+types).
+
+The octal integer literal 0600
, passed as the third parameter to
+WriteFile
, indicates that the file should be created with
+read-write permissions for the current user only. (See the Unix man page
+open(2)
for details.)
+
+In addition to saving pages, we will want to load pages, too: +
+ +{{code "doc/articles/wiki/part1-noerror.go" `/^func loadPage/` `/^}/`}} + +
+The function loadPage
constructs the file name from the title
+parameter, reads the file's contents into a new variable body
, and
+returns a pointer to a Page
literal constructed with the proper
+title and body values.
+
+Functions can return multiple values. The standard library function
+ioutil.ReadFile
returns []byte
and error
.
+In loadPage
, error isn't being handled yet; the "blank identifier"
+represented by the underscore (_
) symbol is used to throw away the
+error return value (in essence, assigning the value to nothing).
+
+But what happens if ReadFile
encounters an error? For example,
+the file might not exist. We should not ignore such errors. Let's modify the
+function to return *Page
and error
.
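+A sketch of the modified function, using the same file-naming convention as save:
+
+func loadPage(title string) (*Page, error) {
+    filename := title + ".txt"
+    body, err := ioutil.ReadFile(filename)
+    if err != nil {
+        return nil, err
+    }
+    return &Page{Title: title, Body: body}, nil
+}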
+
+Callers of this function can now check the second return value; if it is
+nil
then it has successfully loaded a Page. If not, it will be an
+error
that can be handled by the caller (see the
+language specification for details).
+
+At this point we have a simple data structure and the ability to save to and
+load from a file. Let's write a main
function to test what we've
+written:
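+A sketch of such a main function, using the save and loadPage functions above:
+
+func main() {
+    p1 := &Page{Title: "TestPage", Body: []byte("This is a sample Page.")}
+    p1.save()
+    p2, _ := loadPage("TestPage")
+    fmt.Println(string(p2.Body))
+}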
+
+After compiling and executing this code, a file named TestPage.txt
+would be created, containing the contents of p1
. The file would
+then be read into the struct p2
, and its Body
element
+printed to the screen.
+
+You can compile and run the program like this: +
+ ++$ go build wiki.go +$ ./wiki +This is a sample Page. ++ +
+(If you're using Windows you must type "wiki
" without the
+"./
" to run the program.)
+
+Click here to view the code we've written so far. +
+ +net/http
package (an interlude)+Here's a full working example of a simple web server: +
+ +{{code "doc/articles/wiki/http-sample.go"}} + +
+The main
function begins with a call to
+http.HandleFunc
, which tells the http
package to
+handle all requests to the web root ("/"
) with
+handler
.
+
+It then calls http.ListenAndServe
, specifying that it should
+listen on port 8080 on any interface (":8080"
). (Don't
+worry about its second parameter, nil
, for now.)
+This function will block until the program is terminated.
+
+ListenAndServe
always returns an error, since it only returns when an
+unexpected error occurs.
+In order to log that error we wrap the function call with log.Fatal
.
+
+The function handler
is of the type http.HandlerFunc
.
+It takes an http.ResponseWriter
and an http.Request
as
+its arguments.
+
+An http.ResponseWriter
value assembles the HTTP server's response; by writing
+to it, we send data to the HTTP client.
+
+An http.Request
is a data structure that represents the client
+HTTP request. r.URL.Path
is the path component
+of the request URL. The trailing [1:]
means
+"create a sub-slice of Path
from the 1st character to the end."
+This drops the leading "/" from the path name.
+
+If you run this program and access the URL: +
+http://localhost:8080/monkeys+
+the program would present a page containing: +
+Hi there, I love monkeys!+ +
net/http
to serve wiki pages
+To use the net/http
package, it must be imported:
+
+import ( + "fmt" + "io/ioutil" + "log" + "net/http" +) ++ +
+Let's create a handler, viewHandler
that will allow users to
+view a wiki page. It will handle URLs prefixed with "/view/".
+
+Again, note the use of _
to ignore the error
+return value from loadPage
. This is done here for simplicity
+and generally considered bad practice. We will attend to this later.
+
+First, this function extracts the page title from r.URL.Path
,
+the path component of the request URL.
+The Path
is re-sliced with [len("/view/"):]
to drop
+the leading "/view/"
component of the request path.
+This is because the path will invariably begin with "/view/"
,
+which is not part of the page's title.
+
+The function then loads the page data, formats the page with a string of simple
+HTML, and writes it to w
, the http.ResponseWriter
.
+
+To use this handler, we rewrite our main
function to
+initialize http
using the viewHandler
to handle
+any requests under the path /view/
.
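+A sketch of that main function:
+
+func main() {
+    http.HandleFunc("/view/", viewHandler)
+    log.Fatal(http.ListenAndServe(":8080", nil))
+}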
+
+Click here to view the code we've written so far. +
+ +
+Let's create some page data (as test.txt
), compile our code, and
+try serving a wiki page.
+
+Open the test.txt
file in your editor, and save the string "Hello world" (without quotes)
+in it.
+
+$ go build wiki.go +$ ./wiki ++ +
+(If you're using Windows you must type "wiki
" without the
+"./
" to run the program.)
+
+With this web server running, a visit to http://localhost:8080/view/test
+should show a page titled "test" containing the words "Hello world".
+
+A wiki is not a wiki without the ability to edit pages. Let's create two new
+handlers: one named editHandler
to display an 'edit page' form,
+and the other named saveHandler
to save the data entered via the
+form.
+
+First, we add them to main()
:
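+A sketch of the updated main function (the two new handlers are implemented below):
+
+func main() {
+    http.HandleFunc("/view/", viewHandler)
+    http.HandleFunc("/edit/", editHandler)
+    http.HandleFunc("/save/", saveHandler)
+    log.Fatal(http.ListenAndServe(":8080", nil))
+}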
+
+The function editHandler
loads the page
+(or, if it doesn't exist, creates an empty Page
struct),
+and displays an HTML form.
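+A sketch of that handler, emitting the form with fmt.Fprintf:
+
+func editHandler(w http.ResponseWriter, r *http.Request) {
+    title := r.URL.Path[len("/edit/"):]
+    p, err := loadPage(title)
+    if err != nil {
+        p = &Page{Title: title}
+    }
+    fmt.Fprintf(w, "<h1>Editing %s</h1>"+
+        "<form action=\"/save/%s\" method=\"POST\">"+
+        "<textarea name=\"body\">%s</textarea><br>"+
+        "<input type=\"submit\" value=\"Save\">"+
+        "</form>",
+        p.Title, p.Title, p.Body)
+}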
+
+This function will work fine, but all that hard-coded HTML is ugly. +Of course, there is a better way. +
+ +html/template
package
+The html/template
package is part of the Go standard library.
+We can use html/template
to keep the HTML in a separate file,
+allowing us to change the layout of our edit page without modifying the
+underlying Go code.
+
+First, we must add html/template
to the list of imports. We
+also won't be using fmt
anymore, so we have to remove that.
+
+import ( + "html/template" + "io/ioutil" + "net/http" +) ++ +
+Let's create a template file containing the HTML form.
+Open a new file named edit.html
, and add the following lines:
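+A sketch of such a template (the field names assume the Page struct defined earlier):
+
+<h1>Editing {{.Title}}</h1>
+
+<form action="/save/{{.Title}}" method="POST">
+<div><textarea name="body" rows="20" cols="80">{{printf "%s" .Body}}</textarea></div>
+<div><input type="submit" value="Save"></div>
+</form>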
+
+Modify editHandler
to use the template, instead of the hard-coded
+HTML:
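+A sketch of the updated handler:
+
+func editHandler(w http.ResponseWriter, r *http.Request) {
+    title := r.URL.Path[len("/edit/"):]
+    p, err := loadPage(title)
+    if err != nil {
+        p = &Page{Title: title}
+    }
+    t, _ := template.ParseFiles("edit.html")
+    t.Execute(w, p)
+}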
+
+The function template.ParseFiles
will read the contents of
+edit.html
and return a *template.Template
.
+
+The method t.Execute
executes the template, writing the
+generated HTML to the http.ResponseWriter
.
+The .Title
and .Body
dotted identifiers refer to
+p.Title
and p.Body
.
+
+Template directives are enclosed in double curly braces.
+The printf "%s" .Body
instruction is a function call
+that outputs .Body
as a string instead of a stream of bytes,
+the same as a call to fmt.Printf
.
+The html/template
package helps guarantee that only safe and
+correct-looking HTML is generated by template actions. For instance, it
+automatically escapes any greater than sign (>
), replacing it
+with &gt;
, to make sure user data does not corrupt the form
+HTML.
+
+Since we're working with templates now, let's create a template for our
+viewHandler
called view.html
:
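+A sketch of such a template:
+
+<h1>{{.Title}}</h1>
+
+<p>[<a href="/edit/{{.Title}}">edit</a>]</p>
+
+<div>{{printf "%s" .Body}}</div>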
+
+Modify viewHandler
accordingly:
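+A sketch of the updated handler:
+
+func viewHandler(w http.ResponseWriter, r *http.Request) {
+    title := r.URL.Path[len("/view/"):]
+    p, _ := loadPage(title)
+    t, _ := template.ParseFiles("view.html")
+    t.Execute(w, p)
+}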
+
+Notice that we've used almost exactly the same templating code in both +handlers. Let's remove this duplication by moving the templating code +to its own function: +
+ +{{code "doc/articles/wiki/final-template.go" `/^func renderTemplate/` `/^}/`}} + ++And modify the handlers to use that function: +
+ +{{code "doc/articles/wiki/final-template.go" `/^func viewHandler/` `/^}/`}} +{{code "doc/articles/wiki/final-template.go" `/^func editHandler/` `/^}/`}} + +
+If we comment out the registration of our unimplemented save handler in
+main
, we can once again build and test our program.
+Click here to view the code we've written so far.
+
+What if you visit
+/view/APageThatDoesntExist
? You'll see a page containing
+HTML. This is because it ignores the error return value from
+loadPage
and continues to try and fill out the template
+with no data. Instead, if the requested Page doesn't exist, it should
+redirect the client to the edit Page so the content may be created:
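+A sketch of the updated viewHandler, using the renderTemplate function introduced above:
+
+func viewHandler(w http.ResponseWriter, r *http.Request) {
+    title := r.URL.Path[len("/view/"):]
+    p, err := loadPage(title)
+    if err != nil {
+        http.Redirect(w, r, "/edit/"+title, http.StatusFound)
+        return
+    }
+    renderTemplate(w, "view", p)
+}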
+
+The http.Redirect
function adds an HTTP status code of
+http.StatusFound
(302) and a Location
+header to the HTTP response.
+
+The function saveHandler
will handle the submission of forms
+located on the edit pages. After uncommenting the related line in
+main
, let's implement the handler:
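+A sketch of that handler:
+
+func saveHandler(w http.ResponseWriter, r *http.Request) {
+    title := r.URL.Path[len("/save/"):]
+    body := r.FormValue("body")
+    p := &Page{Title: title, Body: []byte(body)}
+    p.save()
+    http.Redirect(w, r, "/view/"+title, http.StatusFound)
+}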
+
+The page title (provided in the URL) and the form's only field,
+Body
, are stored in a new Page
.
+The save()
method is then called to write the data to a file,
+and the client is redirected to the /view/
page.
+
+The value returned by FormValue
is of type string
.
+We must convert that value to []byte
before it will fit into
+the Page
struct. We use []byte(body)
to perform
+the conversion.
+
+There are several places in our program where errors are being ignored. This +is bad practice, not least because when an error does occur the program will +have unintended behavior. A better solution is to handle the errors and return +an error message to the user. That way if something does go wrong, the server +will function exactly how we want and the user can be notified. +
+ +
+First, let's handle the errors in renderTemplate
:
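+A sketch of the updated function:
+
+func renderTemplate(w http.ResponseWriter, tmpl string, p *Page) {
+    t, err := template.ParseFiles(tmpl + ".html")
+    if err != nil {
+        http.Error(w, err.Error(), http.StatusInternalServerError)
+        return
+    }
+    err = t.Execute(w, p)
+    if err != nil {
+        http.Error(w, err.Error(), http.StatusInternalServerError)
+    }
+}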
+
+The http.Error
function sends a specified HTTP response code
+(in this case "Internal Server Error") and error message.
+Already the decision to put this in a separate function is paying off.
+
+Now let's fix up saveHandler
:
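+A sketch of the updated handler:
+
+func saveHandler(w http.ResponseWriter, r *http.Request) {
+    title := r.URL.Path[len("/save/"):]
+    body := r.FormValue("body")
+    p := &Page{Title: title, Body: []byte(body)}
+    err := p.save()
+    if err != nil {
+        http.Error(w, err.Error(), http.StatusInternalServerError)
+        return
+    }
+    http.Redirect(w, r, "/view/"+title, http.StatusFound)
+}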
+
+Any errors that occur during p.save()
will be reported
+to the user.
+
+There is an inefficiency in this code: renderTemplate
calls
+ParseFiles
every time a page is rendered.
+A better approach would be to call ParseFiles
once at program
+initialization, parsing all templates into a single *Template
.
+Then we can use the
+ExecuteTemplate
+method to render a specific template.
+
+First we create a global variable named templates
, and initialize
+it with ParseFiles
.
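+A sketch of that variable:
+
+var templates = template.Must(template.ParseFiles("edit.html", "view.html"))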
+
+The function template.Must
is a convenience wrapper that panics
+when passed a non-nil error
value, and otherwise returns the
+*Template
unaltered. A panic is appropriate here; if the templates
+can't be loaded the only sensible thing to do is exit the program.
+
+The ParseFiles
function takes any number of string arguments that
+identify our template files, and parses those files into templates that are
+named after the base file name. If we were to add more templates to our
+program, we would add their names to the ParseFiles
call's
+arguments.
+
+We then modify the renderTemplate
function to call the
+templates.ExecuteTemplate
method with the name of the appropriate
+template:
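+A sketch of the updated function:
+
+func renderTemplate(w http.ResponseWriter, tmpl string, p *Page) {
+    err := templates.ExecuteTemplate(w, tmpl+".html", p)
+    if err != nil {
+        http.Error(w, err.Error(), http.StatusInternalServerError)
+    }
+}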
+
+Note that the template name is the template file name, so we must
+append ".html"
to the tmpl
argument.
+
+As you may have observed, this program has a serious security flaw: a user +can supply an arbitrary path to be read/written on the server. To mitigate +this, we can write a function to validate the title with a regular expression. +
+ +
+First, add "regexp"
to the import
list.
+Then we can create a global variable to store our validation
+expression:
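+A sketch of such a variable (the character class is one reasonable choice for valid titles):
+
+var validPath = regexp.MustCompile("^/(edit|save|view)/([a-zA-Z0-9]+)$")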
+
+The function regexp.MustCompile
will parse and compile the
+regular expression, and return a regexp.Regexp
.
+MustCompile
is distinct from Compile
in that it will
+panic if the expression compilation fails, while Compile
returns
+an error
as a second parameter.
+
+Now, let's write a function that uses the validPath
+expression to validate the path and extract the page title:
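+A sketch of such a function:
+
+func getTitle(w http.ResponseWriter, r *http.Request) (string, error) {
+    m := validPath.FindStringSubmatch(r.URL.Path)
+    if m == nil {
+        http.NotFound(w, r)
+        return "", errors.New("invalid Page Title")
+    }
+    return m[2], nil // The title is the second subexpression.
+}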
+
+If the title is valid, it will be returned along with a nil
+error value. If the title is invalid, the function will write a
+"404 Not Found" error to the HTTP connection, and return an error to the
+handler. To create a new error, we have to import the errors
+package.
+
+Let's put a call to getTitle
in each of the handlers:
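+For example, viewHandler might begin like this (a sketch):
+
+func viewHandler(w http.ResponseWriter, r *http.Request) {
+    title, err := getTitle(w, r)
+    if err != nil {
+        return
+    }
+    p, err := loadPage(title)
+    if err != nil {
+        http.Redirect(w, r, "/edit/"+title, http.StatusFound)
+        return
+    }
+    renderTemplate(w, "view", p)
+}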
+
+Catching the error condition in each handler introduces a lot of repeated code. +What if we could wrap each of the handlers in a function that does this +validation and error checking? Go's +function +literals provide a powerful means of abstracting functionality +that can help us here. +
+ ++First, we re-write the function definition of each of the handlers to accept +a title string: +
+ ++func viewHandler(w http.ResponseWriter, r *http.Request, title string) +func editHandler(w http.ResponseWriter, r *http.Request, title string) +func saveHandler(w http.ResponseWriter, r *http.Request, title string) ++ +
+Now let's define a wrapper function that takes a function of the above
+type, and returns a function of type http.HandlerFunc
+(suitable to be passed to the function http.HandleFunc
):
+
+func makeHandler(fn func (http.ResponseWriter, *http.Request, string)) http.HandlerFunc { + return func(w http.ResponseWriter, r *http.Request) { + // Here we will extract the page title from the Request, + // and call the provided handler 'fn' + } +} ++ +
+The returned function is called a closure because it encloses values defined
+outside of it. In this case, the variable fn
(the single argument
+to makeHandler
) is enclosed by the closure. The variable
+fn
will be one of our save, edit, or view handlers.
+
+Now we can take the code from getTitle
and use it here
+(with some minor modifications):
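+A sketch of the full wrapper (the title is assumed to be the second subexpression of validPath):
+
+func makeHandler(fn func(http.ResponseWriter, *http.Request, string)) http.HandlerFunc {
+    return func(w http.ResponseWriter, r *http.Request) {
+        m := validPath.FindStringSubmatch(r.URL.Path)
+        if m == nil {
+            http.NotFound(w, r)
+            return
+        }
+        fn(w, r, m[2])
+    }
+}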
+
+The closure returned by makeHandler
is a function that takes
+an http.ResponseWriter
and http.Request
(in other
+words, an http.HandlerFunc
).
+The closure extracts the title
from the request path, and
+validates it with the validPath
regexp. If the
+title
is invalid, an error will be written to the
+ResponseWriter
using the http.NotFound
function.
+If the title
is valid, the enclosed handler function
+fn
will be called with the ResponseWriter
,
+Request
, and title
as arguments.
+
+Now we can wrap the handler functions with makeHandler
in
+main
, before they are registered with the http
+package:
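+A sketch of the updated main function:
+
+func main() {
+    http.HandleFunc("/view/", makeHandler(viewHandler))
+    http.HandleFunc("/edit/", makeHandler(editHandler))
+    http.HandleFunc("/save/", makeHandler(saveHandler))
+    log.Fatal(http.ListenAndServe(":8080", nil))
+}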
+
+Finally we remove the calls to getTitle
from the handler functions,
+making them much simpler:
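+For example, viewHandler becomes (a sketch):
+
+func viewHandler(w http.ResponseWriter, r *http.Request, title string) {
+    p, err := loadPage(title)
+    if err != nil {
+        http.Redirect(w, r, "/edit/"+title, http.StatusFound)
+        return
+    }
+    renderTemplate(w, "view", p)
+}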
+
+Click here to view the final code listing. +
+ ++Recompile the code, and run the app: +
+ ++$ go build wiki.go +$ ./wiki ++ +
+Visiting http://localhost:8080/view/ANewPage +should present you with the page edit form. You should then be able to +enter some text, click 'Save', and be redirected to the newly created page. +
+ ++Here are some simple tasks you might want to tackle on your own: +
+ +tmpl/
and page data in data/
.
+/view/FrontPage
.[PageName]
to <a href="/view/PageName">PageName</a>
.
+ (hint: you could use regexp.ReplaceAllFunc
to do this)
+ [edit]
+ +[edit]
+ ++There is a suite of programs to build and process Go source code. +Instead of being run directly, programs in the suite are usually invoked +by the go program. +
+ +
+The most common way to run these programs is as a subcommand of the go program,
+for instance as go fmt
. Run like this, the command operates on
+complete packages of Go source code, with the go program invoking the
+underlying binary with arguments appropriate to package-level processing.
+
+The programs can also be run as stand-alone binaries, with unmodified arguments,
+using the go tool
subcommand, such as go tool cgo
.
+For most commands this is mainly useful for debugging.
+Some of the commands, such as pprof
, are accessible only through
+the go tool
subcommand.
+
+Finally the fmt
and godoc
commands are installed
+as regular binaries called gofmt
and godoc
because
+they are so often referenced.
+
+Click on the links for more documentation, invocation methods, and usage details. +
+ +Name | ++ | Synopsis | +
---|---|---|
go | ++ |
+The go program manages Go source code and runs the other
+commands listed here.
+See the command docs for usage
+details.
+ |
+
cgo | ++ | Cgo enables the creation of Go packages that call C code. | +
cover | ++ | Cover is a program for creating and analyzing the coverage profiles
+generated by "go test -coverprofile" . |
+
fix | ++ | Fix finds Go programs that use old features of the language and libraries +and rewrites them to use newer ones. | +
fmt | ++ | Fmt formats Go packages, it is also available as an independent +gofmt command with more general options. | +
godoc | ++ | Godoc extracts and generates documentation for Go packages. | +
vet | ++ | Vet examines Go source code and reports suspicious constructs, such as Printf +calls whose arguments do not align with the format string. | +
+This is an abridged list. See the full command reference +for documentation of the compilers and more. +
diff --git a/_content/doc/codewalk/codewalk.css b/_content/doc/codewalk/codewalk.css new file mode 100644 index 00000000..a0814e4d --- /dev/null +++ b/_content/doc/codewalk/codewalk.css @@ -0,0 +1,234 @@ +/* + Copyright 2010 The Go Authors. All rights reserved. + Use of this source code is governed by a BSD-style + license that can be found in the LICENSE file. +*/ + +#codewalk-main { + text-align: left; + width: 100%; + overflow: auto; +} + +#code-display { + border: 0; + width: 100%; +} + +.setting { + font-size: 8pt; + color: #888888; + padding: 5px; +} + +.hotkey { + text-decoration: underline; +} + +/* Style for Comments (the left-hand column) */ + +#comment-column { + margin: 0pt; + width: 30%; +} + +#comment-column.right { + float: right; +} + +#comment-column.left { + float: left; +} + +#comment-area { + overflow-x: hidden; + overflow-y: auto; +} + +.comment { + cursor: pointer; + font-size: 16px; + border: 2px solid #ba9836; + margin-bottom: 10px; + margin-right: 10px; /* yes, for both .left and .right */ +} + +.comment:last-child { + margin-bottom: 0px; +} + +.right .comment { + margin-left: 10px; +} + +.right .comment.first { +} + +.right .comment.last { +} + +.left .comment.first { +} + +.left .comment.last { +} + +.comment.selected { + border-color: #99b2cb; +} + +.right .comment.selected { + border-left-width: 12px; + margin-left: 0px; +} + +.left .comment.selected { + border-right-width: 12px; + margin-right: 0px; +} + +.comment-link { + display: none; +} + +.comment-title { + font-size: small; + font-weight: bold; + background-color: #fffff0; + padding-right: 10px; + padding-left: 10px; + padding-top: 5px; + padding-bottom: 5px; +} + +.right .comment-title { +} + +.left .comment-title { +} + +.comment.selected .comment-title { + background-color: #f8f8ff; +} + +.comment-text { + overflow: auto; + padding-left: 10px; + padding-right: 10px; + padding-top: 10px; + padding-bottom: 5px; + font-size: small; + line-height: 1.3em; +} + +.comment-text p { + margin-top: 0em; + margin-bottom: 0.5em; +} + +.comment-text p:last-child { + margin-bottom: 0em; +} + +.file-name { + font-size: x-small; + padding-top: 0px; + padding-bottom: 5px; +} + +.hidden-filepaths .file-name { + display: none; +} + +.path-dir { + color: #555; +} + +.path-file { + color: #555; +} + + +/* Style for Code (the right-hand column) */ + +/* Wrapper for the code column to make widths get calculated correctly */ +#code-column { + display: block; + position: relative; + margin: 0pt; + width: 70%; +} + +#code-column.left { + float: left; +} + +#code-column.right { + float: right; +} + +#code-area { + background-color: #f8f8ff; + border: 2px solid #99b2cb; + padding: 5px; +} + +.left #code-area { + margin-right: -1px; +} + +.right #code-area { + margin-left: -1px; +} + +#code-header { + margin-bottom: 5px; +} + +#code { + background-color: white; +} + +code { + font-size: 100%; +} + +.codewalkhighlight { + font-weight: bold; + background-color: #f8f8ff; +} + +#code-display { + margin-top: 0px; + margin-bottom: 0px; +} + +#sizer { + position: absolute; + cursor: col-resize; + left: 0px; + top: 0px; + width: 8px; +} + +/* Style for options (bottom strip) */ + +#code-options { + display: none; +} + +#code-options > span { + padding-right: 20px; +} + +#code-options .selected { + border-bottom: 1px dotted; +} + +#comment-options { + text-align: center; +} + +div#content { + padding-bottom: 0em; +} diff --git a/_content/doc/codewalk/codewalk.js b/_content/doc/codewalk/codewalk.js new file mode 100644 index 
00000000..4f59a8fc --- /dev/null +++ b/_content/doc/codewalk/codewalk.js @@ -0,0 +1,305 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/** + * A class to hold information about the Codewalk Viewer. + * @param {jQuery} context The top element in whose context the viewer should + * operate. It will not touch any elements above this one. + * @constructor + */ + var CodewalkViewer = function(context) { + this.context = context; + + /** + * The div that contains all of the comments and their controls. + */ + this.commentColumn = this.context.find('#comment-column'); + + /** + * The div that contains the comments proper. + */ + this.commentArea = this.context.find('#comment-area'); + + /** + * The div that wraps the iframe with the code, as well as the drop down menu + * listing the different files. + * @type {jQuery} + */ + this.codeColumn = this.context.find('#code-column'); + + /** + * The div that contains the code but excludes the options strip. + * @type {jQuery} + */ + this.codeArea = this.context.find('#code-area'); + + /** + * The iframe that holds the code (from Sourcerer). + * @type {jQuery} + */ + this.codeDisplay = this.context.find('#code-display'); + + /** + * The overlaid div used as a grab handle for sizing the code/comment panes. + * @type {jQuery} + */ + this.sizer = this.context.find('#sizer'); + + /** + * The full-screen overlay that ensures we don't lose track of the mouse + * while dragging. + * @type {jQuery} + */ + this.overlay = this.context.find('#overlay'); + + /** + * The hidden input field that we use to hold the focus so that we can detect + * shortcut keypresses. + * @type {jQuery} + */ + this.shortcutInput = this.context.find('#shortcut-input'); + + /** + * The last comment that was selected. + * @type {jQuery} + */ + this.lastSelected = null; +}; + +/** + * Minimum width of the comments or code pane, in pixels. + * @type {number} + */ +CodewalkViewer.MIN_PANE_WIDTH = 200; + +/** + * Navigate the code iframe to the given url and update the code popout link. + * @param {string} url The target URL. + * @param {Object} opt_window Window dependency injection for testing only. + */ +CodewalkViewer.prototype.navigateToCode = function(url, opt_window) { + if (!opt_window) opt_window = window; + // Each iframe is represented by two distinct objects in the DOM: an iframe + // object and a window object. These do not expose the same capabilities. + // Here we need to get the window representation to get the location member, + // so we access it directly through window[] since jQuery returns the iframe + // representation. + // We replace location rather than set so as not to create a history for code + // navigation. + opt_window['code-display'].location.replace(url); + var k = url.indexOf('&'); + if (k != -1) url = url.slice(0, k); + k = url.indexOf('fileprint='); + if (k != -1) url = url.slice(k+10, url.length); + this.context.find('#code-popout-link').attr('href', url); +}; + +/** + * Selects the first comment from the list and forces a refresh of the code + * view. + */ +CodewalkViewer.prototype.selectFirstComment = function() { + // TODO(rsc): handle case where there are no comments + var firstSourcererLink = this.context.find('.comment:first'); + this.changeSelectedComment(firstSourcererLink); +}; + +/** + * Sets the target on all links nested inside comments to be _blank. 
+ */ +CodewalkViewer.prototype.targetCommentLinksAtBlank = function() { + this.context.find('.comment a[href], #description a[href]').each(function() { + if (!this.target) this.target = '_blank'; + }); +}; + +/** + * Installs event handlers for all the events we care about. + */ +CodewalkViewer.prototype.installEventHandlers = function() { + var self = this; + + this.context.find('.comment') + .click(function(event) { + if (jQuery(event.target).is('a[href]')) return true; + self.changeSelectedComment(jQuery(this)); + return false; + }); + + this.context.find('#code-selector') + .change(function() {self.navigateToCode(jQuery(this).val());}); + + this.context.find('#description-table .quote-feet.setting') + .click(function() {self.toggleDescription(jQuery(this)); return false;}); + + this.sizer + .mousedown(function(ev) {self.startSizerDrag(ev); return false;}); + this.overlay + .mouseup(function(ev) {self.endSizerDrag(ev); return false;}) + .mousemove(function(ev) {self.handleSizerDrag(ev); return false;}); + + this.context.find('#prev-comment') + .click(function() { + self.changeSelectedComment(self.lastSelected.prev()); return false; + }); + + this.context.find('#next-comment') + .click(function() { + self.changeSelectedComment(self.lastSelected.next()); return false; + }); + + // Workaround for Firefox 2 and 3, which steal focus from the main document + // whenever the iframe content is (re)loaded. The input field is not shown, + // but is a way for us to bring focus back to a place where we can detect + // keypresses. + this.context.find('#code-display') + .load(function(ev) {self.shortcutInput.focus();}); + + jQuery(document).keypress(function(ev) { + switch(ev.which) { + case 110: // 'n' + self.changeSelectedComment(self.lastSelected.next()); + return false; + case 112: // 'p' + self.changeSelectedComment(self.lastSelected.prev()); + return false; + default: // ignore + } + }); + + window.onresize = function() {self.updateHeight();}; +}; + +/** + * Starts dragging the pane sizer. + * @param {Object} ev The mousedown event that started us dragging. + */ +CodewalkViewer.prototype.startSizerDrag = function(ev) { + this.initialCodeWidth = this.codeColumn.width(); + this.initialCommentsWidth = this.commentColumn.width(); + this.initialMouseX = ev.pageX; + this.overlay.show(); +}; + +/** + * Handles dragging the pane sizer. + * @param {Object} ev The mousemove event updating dragging position. + */ +CodewalkViewer.prototype.handleSizerDrag = function(ev) { + var delta = ev.pageX - this.initialMouseX; + if (this.codeColumn.is('.right')) delta = -delta; + var proposedCodeWidth = this.initialCodeWidth + delta; + var proposedCommentWidth = this.initialCommentsWidth - delta; + var mw = CodewalkViewer.MIN_PANE_WIDTH; + if (proposedCodeWidth < mw) delta = mw - this.initialCodeWidth; + if (proposedCommentWidth < mw) delta = this.initialCommentsWidth - mw; + proposedCodeWidth = this.initialCodeWidth + delta; + proposedCommentWidth = this.initialCommentsWidth - delta; + // If window is too small, don't even try to resize. + if (proposedCodeWidth < mw || proposedCommentWidth < mw) return; + this.codeColumn.width(proposedCodeWidth); + this.commentColumn.width(proposedCommentWidth); + this.options.codeWidth = parseInt( + this.codeColumn.width() / + (this.codeColumn.width() + this.commentColumn.width()) * 100); + this.context.find('#code-column-width').text(this.options.codeWidth + '%'); +}; + +/** + * Ends dragging the pane sizer. + * @param {Object} ev The mouseup event that caused us to stop dragging. 
+ */ +CodewalkViewer.prototype.endSizerDrag = function(ev) { + this.overlay.hide(); + this.updateHeight(); +}; + +/** + * Toggles the Codewalk description between being shown and hidden. + * @param {jQuery} target The target that was clicked to trigger this function. + */ +CodewalkViewer.prototype.toggleDescription = function(target) { + var description = this.context.find('#description'); + description.toggle(); + target.find('span').text(description.is(':hidden') ? 'show' : 'hide'); + this.updateHeight(); +}; + +/** + * Changes the side of the window on which the code is shown and saves the + * setting in a cookie. + * @param {string?} codeSide The side on which the code should be, either + * 'left' or 'right'. + */ +CodewalkViewer.prototype.changeCodeSide = function(codeSide) { + var commentSide = codeSide == 'left' ? 'right' : 'left'; + this.context.find('#set-code-' + codeSide).addClass('selected'); + this.context.find('#set-code-' + commentSide).removeClass('selected'); + // Remove previous side class and add new one. + this.codeColumn.addClass(codeSide).removeClass(commentSide); + this.commentColumn.addClass(commentSide).removeClass(codeSide); + this.sizer.css(codeSide, 'auto').css(commentSide, 0); + this.options.codeSide = codeSide; +}; + +/** + * Adds selected class to newly selected comment, removes selected style from + * previously selected comment, changes drop down options so that the correct + * file is selected, and updates the code popout link. + * @param {jQuery} target The target that was clicked to trigger this function. + */ +CodewalkViewer.prototype.changeSelectedComment = function(target) { + var currentFile = target.find('.comment-link').attr('href'); + if (!currentFile) return; + + if (!(this.lastSelected && this.lastSelected.get(0) === target.get(0))) { + if (this.lastSelected) this.lastSelected.removeClass('selected'); + target.addClass('selected'); + this.lastSelected = target; + var targetTop = target.position().top; + var parentTop = target.parent().position().top; + if (targetTop + target.height() > parentTop + target.parent().height() || + targetTop < parentTop) { + var delta = targetTop - parentTop; + target.parent().animate( + {'scrollTop': target.parent().scrollTop() + delta}, + Math.max(delta / 2, 200), 'swing'); + } + var fname = currentFile.match(/(?:select=|fileprint=)\/[^&]+/)[0]; + fname = fname.slice(fname.indexOf('=')+2, fname.length); + this.context.find('#code-selector').val(fname); + this.context.find('#prev-comment').toggleClass( + 'disabled', !target.prev().length); + this.context.find('#next-comment').toggleClass( + 'disabled', !target.next().length); + } + + // Force original file even if user hasn't changed comments since they may + // have navigated away from it within the iframe without us knowing. + this.navigateToCode(currentFile); +}; + +/** + * Updates the viewer by changing the height of the comments and code so that + * they fit within the height of the window. The function is typically called + * after the user changes the window size. 
+ */ +CodewalkViewer.prototype.updateHeight = function() { + var windowHeight = jQuery(window).height() - 5 // GOK + var areaHeight = windowHeight - this.codeArea.offset().top + var footerHeight = this.context.find('#footer').outerHeight(true) + this.commentArea.height(areaHeight - footerHeight - this.context.find('#comment-options').outerHeight(true)) + var codeHeight = areaHeight - footerHeight - 15 // GOK + this.codeArea.height(codeHeight) + this.codeDisplay.height(codeHeight - this.codeDisplay.offset().top + this.codeArea.offset().top); + this.sizer.height(codeHeight); +}; + +window.initFuncs.push(function() { + var viewer = new CodewalkViewer(jQuery('#codewalk-main')); + viewer.selectFirstComment(); + viewer.targetCommentLinksAtBlank(); + viewer.installEventHandlers(); + viewer.updateHeight(); +}); diff --git a/_content/doc/codewalk/codewalk.xml b/_content/doc/codewalk/codewalk.xml new file mode 100644 index 00000000..34e6e919 --- /dev/null +++ b/_content/doc/codewalk/codewalk.xml @@ -0,0 +1,124 @@ +/doc/codewalk/
name
+ is loaded from the input file $GOROOT/doc/codewalk/
name.xml
.
+ $GOROOT/doc/codewalk/codewalk.xml
,
+ shown in the main window pane to the left.
+<codewalk>
element.
+ That element's title
attribute gives the title
+ that is used both on the codewalk page and in the codewalk list.
+<step>
element
+ nested inside the main <codewalk>
.
+ The step element's title
attribute gives the step's title,
+ which is shown in a shaded bar above the main step text.
+ The element's src
attribute specifies the source
+ code to show in the main window pane and, optionally, a range of
+ lines to highlight.
+ src
is just a file name.
+src
attribute of the form
+ filename:
address,
+ where address is an address in the syntax used by the text editors sam and acme.
+ /title=/
,
+ which matches the first instance of that regular expression (title=
) in the file.
+/
regexp1/,/
regexp2/
.
+ The highlight begins with the line containing the first match for regexp1
+ and ends with the line containing the first match for regexp2
+ after the end of the match for regexp1.
+ Ignoring the HTML quoting,
+ The line containing the first match for regexp1 will be the first one highlighted,
+ and the line containing the first match for regexp2.
+ /<step/,/step>/
looks for the first instance of
+ <step
in the file, and then starting after that point,
+ looks for the first instance of step>
.
+ (Click on the “Steps” step above to see the highlight in action.)
+ Note that the <
and >
had to be written
+ using XML escapes in order to be valid XML.
+/
regexp/
+ and /
regexp1/,/
regexp2/
+ forms suffice for most highlighting.
+ sam
):
+ Simple addresses | |
# n |
+ The empty string after character n |
n | +Line n |
/ regexp/ |
+ The first following match of the regular expression |
$ |
+ The null string at the end of the file |
Compound addresses | |
a1+ a2 |
+ The address a2 evaluated starting at the right of a1 |
a1- a2 |
+ The address a2 evaluated in the reverse direction starting at the left of a1 |
a1, a2 |
+ From the left of a1 to the right of a2 (default 0,$ ). |
score
type stores the scores of the current and opposing
+ players, in addition to the points accumulated during the current turn.
+action
type is a function that takes a score
+ and returns the resulting score
and whether the current turn is
+ over.
+ player
and opponent
fields
+ in the resulting score
should be swapped, as it is now the other player's
+ turn.
+roll
and stay
each return a pair of
+ values. They also match the action
type signature. These
+ action
functions define the rules of Pig.
+strategy
is a function that takes a score
as input
+ and returns an action
to perform. action
is itself a function.)
+k
is
+ enclosed by this function literal, which matches the strategy
type
+ signature.
+action
to update the
+ score
until one player reaches 100 points. Each
+ action
is selected by calling the strategy
function
+ associated with the current player.
+roundRobin
function simulates a tournament and tallies wins.
+ Each strategy plays each other strategy gamesPerSeries
times.
+ratioString
take a variable number of
+ arguments. These arguments are available as a slice inside the function.
+main
function defines 100 basic strategies, simulates a round
+ robin tournament, and then prints the win/loss record of each strategy.
+ map[string][]string
.
+ Each map key is a prefix (a string
) and its values are
+ lists of suffixes (a slice of strings, []string
).
+ +map[string][]string{ + " ": {"I"}, + " I": {"am"}, + "I am": {"a", "not"}, + "a free": {"man!"}, + "am a": {"free"}, + "am not": {"a"}, + "a number!": {"I"}, + "number! I": {"am"}, + "not a": {"number!"}, +}+ While each prefix consists of multiple words, we + store prefixes in the map as a single
string
.
+ It would seem more natural to store the prefix as a
+ []string
, but we can't do this with a map because the
+ key type of a map must implement equality (and slices do not).
+ []string
and join the strings together with a space
+ to generate the map key:
+ +Prefix Map key + +[]string{"", ""} " " +[]string{"", "I"} " I" +[]string{"I", "am"} "I am" ++
Chain
struct stores
+ this data.
+Chain
struct has two unexported fields (those that
+ do not begin with an upper case character), and so we write a
+ NewChain
constructor function that initializes the
+ chain
map with make
and sets the
+ prefixLen
field.
+ main
) and therefore
+ there is little practical difference between exported and unexported
+ fields. We could just as easily write out the contents of this function
+ when we want to construct a new Chain.
+ But using these unexported fields is good practice; it clearly denotes
+ that only methods of Chain and its constructor function should access
+ those fields. Also, structuring Chain
like this means we
+ could easily move it into its own package at some later date.
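+A minimal sketch of the struct and constructor described here, using the
+field names mentioned above (a sketch, not necessarily the exact program):
+
+// Chain contains a map ("chain") of prefixes to a list of suffixes.
+// A prefix is a string of prefixLen words joined with spaces.
+// A suffix is a single word. A prefix can have multiple suffixes.
+type Chain struct {
+    chain     map[string][]string
+    prefixLen int
+}
+
+// NewChain returns a new Chain with prefixes of prefixLen words.
+func NewChain(prefixLen int) *Chain {
+    return &Chain{make(map[string][]string), prefixLen}
+}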
+Prefix
type with the concrete type []string
.
+ Defining a named type clearly allows us to be explicit when we are
+ working with a prefix instead of just a []string
.
+ Also, in Go we can define methods on any named type (not just structs),
+ so we can add methods that operate on Prefix
if we need to.
+Prefix
is
+ String
. It returns a string
representation
+ of a Prefix
by joining the slice elements together with
+ spaces. We will use this method to generate keys when working with
+ the chain map.
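+Sketched out, assuming the strings package is imported:
+
+// Prefix is a Markov chain prefix of one or more words.
+type Prefix []string
+
+// String returns the Prefix as a string, for use as a map key.
+func (p Prefix) String() string {
+    return strings.Join(p, " ")
+}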
+Build
method reads text from an io.Reader
+ and parses it into prefixes and suffixes that are stored in the
+ Chain
.
+ io.Reader
is an
+ interface type that is widely used by the standard library and
+ other Go code. Our code uses the
+ fmt.Fscan
function, which
+ reads space-separated values from an io.Reader
.
+ Build
method returns once the Reader
's
+ Read
method returns io.EOF
(end of file)
+ or some other read error occurs.
+Readers
. For efficiency we wrap the provided
+ io.Reader
with
+ bufio.NewReader
to create a
+ new io.Reader
that provides buffering.
+Prefix
slice
+ p
using the Chain
's prefixLen
+ field as its length.
+ We'll use this variable to hold the current prefix and mutate it with
+ each new word we encounter.
+Reader
into a
+ string
variable s
using
+ fmt.Fscan
. Since Fscan
uses space to
+ separate each input value, each call will yield just one word
+ (including punctuation), which is exactly what we need.
+ Fscan
returns an error if it encounters a read error
+ (io.EOF
, for example) or if it can't scan the requested
+ value (in our case, a single string). In either case we just want to
+ stop scanning, so we break
out of the loop.
+s
is a new suffix. We add the new
+ prefix/suffix combination to the chain
map by computing
+ the map key with p.String
and appending the suffix
+ to the slice stored under that key.
+ append
function appends elements to a slice
+ and allocates new storage when necessary. When the provided slice is
+ nil
, append
allocates a new slice.
+ This behavior conveniently ties in with the semantics of our map:
+ retrieving an unset key returns the zero value of the value type and
+ the zero value of []string
is nil
.
+ When our program encounters a new prefix (yielding a nil
+ value in the map) append
will allocate a new slice.
+ append
function and slices
+ in general see the
+ Slices: usage and internals article.
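+Putting the pieces described so far together, a sketch of Build might look
+like this (it uses the Shift method introduced below, and assumes the
+bufio, fmt, and io packages are imported):
+
+// Build reads text from the provided Reader and
+// parses it into prefixes and suffixes stored in the Chain.
+func (c *Chain) Build(r io.Reader) {
+    br := bufio.NewReader(r) // wrap r with a buffered reader for efficiency
+    p := make(Prefix, c.prefixLen)
+    for {
+        var s string
+        if _, err := fmt.Fscan(br, &s); err != nil {
+            break // io.EOF or another read error: stop scanning
+        }
+        key := p.String()
+        c.chain[key] = append(c.chain[key], s)
+        p.Shift(s)
+    }
+}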
++p == Prefix{"I", "am"} +s == "not"+ the new value for
p
would be
+ +p == Prefix{"am", "not"}+ This operation is also required during text generation so we put + the code to perform this mutation of the slice inside a method on +
Prefix
named Shift
.
+Shift
method uses the built-in copy
+ function to copy the last len(p)-1 elements of p
to
+ the start of the slice, effectively moving the elements
+ one index to the left (if you consider zero as the leftmost index).
+ +p := Prefix{"I", "am"} +copy(p, p[1:]) +// p == Prefix{"am", "am"}+ We then assign the provided
word
to the last index
+ of the slice:
+ +// suffix == "not" +p[len(p)-1] = suffix +// p == Prefix{"am", "not"}+
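+The Shift method itself is short; a sketch:
+
+// Shift removes the first word from the Prefix and appends the given word.
+func (p Prefix) Shift(word string) {
+    copy(p, p[1:])     // move the remaining words one index to the left
+    p[len(p)-1] = word // place the new word at the end
+}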
Generate
method is similar to Build
+ except that instead of reading words from a Reader
+ and storing them in a map, it reads words from the map and
+ appends them to a slice (words
).
+ Generate
uses a conditional for loop to generate
+ up to n
words.
+chain
map at key
+ p.String()
and assign its contents to choices
.
+ len(choices)
is zero we break out of the loop as there
+ are no potential suffixes for that prefix.
+ This test also works if the key isn't present in the map at all:
+ in that case, choices
will be nil
and the
+ length of a nil
slice is zero.
+rand.Intn
function.
+ It returns a random integer up to (but not including) the provided
+ value. Passing in len(choices)
gives us a random index
+ into the full length of the list.
+ next
and append it to the words
slice.
+ Shift
the new suffix onto the prefix just as
+ we did in the Build
method.
+strings.Join
function to join the elements of
+ the words
slice together, separated by spaces.
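+Assembled from the steps above, a sketch of Generate (assuming the
+math/rand and strings packages are imported):
+
+// Generate returns a string of at most n words generated from the Chain.
+func (c *Chain) Generate(n int) string {
+    p := make(Prefix, c.prefixLen)
+    var words []string
+    for i := 0; i < n; i++ {
+        choices := c.chain[p.String()]
+        if len(choices) == 0 {
+            break // no known suffixes for this prefix
+        }
+        next := choices[rand.Intn(len(choices))]
+        words = append(words, next)
+        p.Shift(next)
+    }
+    return strings.Join(words, " ")
+}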
+flag
package to parse
+ command-line flags.
+ flag.Int
register new flags with the
+ flag
package. The arguments to Int
are the
+ flag name, its default value, and a description. The Int
+ function returns a pointer to an integer that will contain the
+ user-supplied value (or the default value if the flag was omitted on
+ the command-line).
+main
function begins by parsing the command-line
+ flags with flag.Parse
and seeding the rand
+ package's random number generator with the current time.
+ flag.Parse
function will print an informative usage
+ message and terminate the program.
+Chain
we call NewChain
+ with the value of the prefix
flag.
+ Build
with
+ os.Stdin
(which implements io.Reader
) so
+ that it will read its input from standard input.
+Generate
with
+ the value of the words
flag and assigning the result
+ to the variable text
.
+ fmt.Println
to write the text to standard
+ output, followed by a newline.
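+Assembled, the main function might look roughly like this (the flag names
+and default values here are illustrative; assumes flag, fmt, math/rand, os,
+and time are imported):
+
+func main() {
+    numWords := flag.Int("words", 100, "maximum number of words to print")
+    prefixLen := flag.Int("prefix", 2, "prefix length in words")
+    flag.Parse()                     // parse command-line flags
+    rand.Seed(time.Now().UnixNano()) // seed the random number generator
+
+    c := NewChain(*prefixLen)     // initialize a new Chain
+    c.Build(os.Stdin)             // build chains from standard input
+    text := c.Generate(*numWords) // generate text
+    fmt.Println(text)             // write text to standard output
+}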
++$ go build markov.go+ And then execute it while piping in some input text: +
+$ echo "a man a plan a canal panama" \ + | ./markov -prefix=1 +a plan a man a plan a canal panama+ Here's a transcript of generating some text using the Go distribution's + README file as source material: +
+$ ./markov -words=10 < $GOROOT/README +This is the source code repository for the Go source +$ ./markov -prefix=1 -words=10 < $GOROOT/README +This is the go directory (the one containing this README). +$ ./markov -prefix=1 -words=10 < $GOROOT/README +This is the variable if you have just untarred a+
Generate
function does a lot of allocations when it
+ builds the words
slice. As an exercise, modify it to
+ take an io.Writer
to which it incrementally writes the
+ generated text with Fprint
.
+ Aside from being more efficient this makes Generate
+ more symmetrical to Build
.
++The Go project welcomes all contributors. +
+ ++This document is a guide to help you through the process +of contributing to the Go project, which is a little different +from that used by other open source projects. +We assume you have a basic understanding of Git and Go. +
+ ++In addition to the information here, the Go community maintains a +CodeReview wiki page. +Feel free to contribute to the wiki as you learn the review process. +
+ +
+Note that the gccgo
front end lives elsewhere;
+see Contributing to gccgo.
+
+The first step is registering as a Go contributor and configuring your environment. +Here is a checklist of the required steps to follow: +
+ +git
+is configured to create commits with that account's e-mail address.
+git-codereview
by running
+go get -u golang.org/x/review/git-codereview
++If you prefer, there is an automated tool that walks through these steps. +Just run: +
+ ++$ go get -u golang.org/x/tools/cmd/go-contrib-init +$ cd /code/to/edit +$ go-contrib-init ++ +
+The rest of this chapter elaborates on these instructions. +If you have completed the steps above (either manually or through the tool), jump to +Before contributing code. +
+ ++A contribution to Go is made through a Google account with a specific +e-mail address. +Make sure to use the same account throughout the process and +for all your subsequent contributions. +You may need to decide whether to use a personal address or a corporate address. +The choice will depend on who +will own the copyright for the code that you will be writing +and submitting. +You might want to discuss this topic with your employer before deciding which +account to use. +
+ ++Google accounts can either be Gmail e-mail accounts, G Suite organization accounts, or +accounts associated with an external e-mail address. +For instance, if you need to use +an existing corporate e-mail that is not managed through G Suite, you can create +an account associated +with your existing +e-mail address. +
+ ++You also need to make sure that your Git tool is configured to create commits +using your chosen e-mail address. +You can either configure Git globally +(as a default for all projects), or locally (for a single specific project). +You can check the current configuration with this command: +
+ ++$ git config --global user.email # check current global config +$ git config user.email # check current local config ++ +
+To change the configured address: +
+ ++$ git config --global user.email name@example.com # change global config +$ git config user.email name@example.com # change local config ++ + +
+Before sending your first change to the Go project +you must have completed one of the following two CLAs. +Which CLA you should sign depends on who owns the copyright to your work. +
+ ++You can check your currently signed agreements and sign new ones at +the Google Developers +Contributor License Agreements website. +If the copyright holder for your contribution has already completed the +agreement in connection with another Google open source project, +it does not need to be completed again. +
+ +
+If the copyright holder for the code you are submitting changes—for example,
+if you start contributing code on behalf of a new company—please send mail
+to the golang-dev
+mailing list.
+This will let us know the situation so we can make sure an appropriate agreement is
+completed and update the AUTHORS
file.
+
+The main Go repository is located at
+go.googlesource.com,
+a Git server hosted by Google.
+Authentication on the web server is made through your Google account, but
+you also need to configure git
on your computer to access it.
+Follow these steps:
+
.gitcookies
file.
+If you are using a Windows computer and running cmd
,
+you should instead follow the instructions in the yellow box to run the command;
+otherwise run the regular script.
++Gerrit is an open-source tool used by Go maintainers to discuss and review +code submissions. +
+ ++To register your account, visit +go-review.googlesource.com/login/ and sign in once using the same Google Account you used above. +
+ +
+Changes to Go must be reviewed before they are accepted, no matter who makes the change.
+A custom git
command called git-codereview
+simplifies sending changes to Gerrit.
+
+Install the git-codereview
command by running,
+
+$ go get -u golang.org/x/review/git-codereview ++ +
+Make sure git-codereview
is installed in your shell path, so that the
+git
command can find it.
+Check that
+
+$ git codereview help ++ +
+prints help text, not an error. If it prints an error, make sure that
+$GOPATH/bin
is in your $PATH
.
+
+On Windows, when using git-bash you must make sure that
+git-codereview.exe
is in your git
exec-path.
+Run git --exec-path
to discover the right location then create a
+symbolic link or just copy the executable from $GOPATH/bin
to this
+directory.
+
+The project welcomes code patches, but to make sure things are well +coordinated you should discuss any significant change before starting +the work. +It's recommended that you signal your intention to contribute in the +issue tracker, either by filing +a new issue or by claiming +an existing one. +
+The Go project consists of the main
+go repository, which contains the
+source code for the Go language, as well as many golang.org/x/... repositories.
+These contain the various tools and infrastructure that support Go. For
+example, golang.org/x/pkgsite
+is for pkg.go.dev,
+golang.org/x/playground
+is for the Go playground, and
+golang.org/x/tools contains
+a variety of Go tools, including the Go language server,
+gopls. You can see a
+list of all the golang.org/x/... repositories on
+go.googlesource.com.
+ ++Whether you already know what contribution to make, or you are searching for +an idea, the issue tracker is +always the first place to go. +Issues are triaged to categorize them and manage the workflow. +
+ ++The majority of the golang.org/x/... repos also use the main Go +issue tracker. However, a few of these repositories manage their issues +separately, so please be sure to check the right tracker for the repository to +which you would like to contribute. +
+ ++Most issues will be marked with one of the following workflow labels: +
+ ++You can use GitHub's search functionality to find issues to help out with. Examples: +
+ +is:issue is:open label:NeedsInvestigation
+ is:issue is:open label:NeedsFix
+ is:issue is:open label:NeedsFix "golang.org/cl"
+ is:issue is:open label:NeedsFix NOT "golang.org/cl"
+ +Excluding very trivial changes, all contributions should be connected +to an existing issue. +Feel free to open one and discuss your plans. +This process gives everyone a chance to validate the design, +helps prevent duplication of effort, +and ensures that the idea fits inside the goals for the language and tools. +It also checks that the design is sound before code is written; +the code review tool is not the place for high-level discussions. +
+ +
+When planning work, please note that the Go project follows a six-month development cycle
+for the main Go repository. The latter half of each cycle is a three-month
+feature freeze during which only bug fixes and documentation updates are
+accepted. New contributions can be sent during a feature freeze, but they will
+not be merged until the freeze is over. The freeze applies to the entire main
+repository as well as to the code in golang.org/x/... repositories that is
+needed to build the binaries included in the release. See the lists of packages
+vendored into
+the standard library
+and the go
command.
+
+Significant changes to the language, libraries, or tools must go +through the +change proposal process +before they can be accepted. +
+ ++Sensitive security-related issues (only!) should be reported to security@golang.org. +
+ ++First-time contributors that are already familiar with the +GitHub flow +are encouraged to use the same process for Go contributions. +Even though Go +maintainers use Gerrit for code review, a bot called Gopherbot has been created to sync +GitHub pull requests to Gerrit. +
+ ++Open a pull request as you normally would. +Gopherbot will create a corresponding Gerrit change and post a link to +it on your GitHub pull request; updates to the pull request will also +get reflected in the Gerrit change. +When somebody comments on the change, their comment will be also +posted in your pull request, so you will get a notification. +
+ ++Some things to keep in mind: +
+ ++It is not possible to fully sync Gerrit and GitHub, at least at the moment, +so we recommend learning Gerrit. +It's different but powerful and familiarity with it will help you understand +the flow. +
+ ++This is an overview of the overall process: +
+ +go.googlesource.com
and
+make sure it's stable by compiling and testing it once.
+
+If you're making a change to the +main Go repository:
+ ++$ git clone https://go.googlesource.com/go +$ cd go/src +$ ./all.bash # compile and test ++ +
+If you're making a change to one of the golang.org/x/... repositories +(golang.org/x/tools, +in this example): +
+ ++$ git clone https://go.googlesource.com/tools +$ cd tools +$ go test ./... # compile and test ++
git
codereview
change
; that
+will create or amend a single commit in the branch.
++$ git checkout -b mybranch +$ [edit files...] +$ git add [files...] +$ git codereview change # create commit in the branch +$ [edit again...] +$ git add [files...] +$ git codereview change # amend the existing commit with new changes +$ [etc.] ++
all.bash
.
+
+In the main Go repository:
++$ ./all.bash # recompile and test ++ +
In a golang.org/x/... repository:
++$ go test ./... # recompile and test ++
git
+codereview
mail
(which doesn't use e-mail, despite the name).
++$ git codereview mail # send changes to Gerrit ++
+$ [edit files...] +$ git add [files...] +$ git codereview change # update same commit +$ git codereview mail # send to Gerrit again ++
+The rest of this section describes these steps in more detail. +
+ + +
+In addition to a recent Go installation, you need to have a local copy of the source
+checked out from the correct repository.
+You can check out the Go source repo onto your local file system anywhere
+you want as long as it's outside your GOPATH
.
+Clone from go.googlesource.com
(not GitHub):
+
Main Go repository:
++$ git clone https://go.googlesource.com/go +$ cd go ++ +
golang.org/x/... repository
+(golang.org/x/tools in this example): ++$ git clone https://go.googlesource.com/tools +$ cd tools ++ +
+Each Go change must be made in a separate branch, created from the master branch.
+You can use
+the normal git
commands to create a branch and add changes to the
+staging area:
+
+$ git checkout -b mybranch +$ [edit files...] +$ git add [files...] ++ +
+To commit changes, instead of git commit
, use git codereview change
.
+
+$ git codereview change +(open $EDITOR) ++ +
+You can edit the commit description in your favorite editor as usual.
+The git
codereview
change
command
+will automatically add a unique Change-Id line near the bottom.
+That line is used by Gerrit to match successive uploads of the same change.
+Do not edit or delete it.
+A Change-Id looks like this:
+
+Change-Id: I2fbdbffb3aab626c4b6f56348861b7909e3e8990 ++ +
+The tool also checks that you've
+run go
fmt
over the source code, and that
+the commit message follows the suggested format.
+
+If you need to edit the files again, you can stage the new changes and
+re-run git
codereview
change
: each subsequent
+run will amend the existing commit while preserving the Change-Id.
+
+Make sure that you always keep a single commit in each branch.
+If you add more
+commits by mistake, you can use git
rebase
to
+squash them together
+into a single one.
+
+You've written and tested your code, but +before sending code out for review, run all the tests for the whole +tree to make sure the changes don't break other packages or programs. +
+ +This can be done by running all.bash
:
+$ cd go/src +$ ./all.bash ++ +
+(To build under Windows use all.bat
)
+
+After running for a while and printing a lot of testing output, the command should finish +by printing, +
+ ++ALL TESTS PASSED ++ +
+You can use make.bash
instead of all.bash
+to just build the compiler and the standard library without running the test suite.
+Once the go
tool is built, it will be installed as bin/go
+under the directory in which you cloned the Go repository, and you can
+run it directly from there.
+See also
+the section on how to test your changes quickly.
+
+Run the tests for the entire repository +(golang.org/x/tools, +in this example): +
+ ++$ cd tools +$ go test ./... ++ +
+If you're concerned about the build status, +you can check the Build Dashboard. +Test failures may also be caught by the TryBots in code review. +
+ ++Some repositories, like +golang.org/x/vscode-go will +have different testing infrastructures, so always check the documentation +for the repository in which you are working. The README file in the root of the +repository will usually have this information. +
+ +
+Once the change is ready and tested over the whole tree, send it for review.
+This is done with the mail
sub-command which, despite its name, doesn't
+directly mail anything; it just sends the change to Gerrit:
+
+$ git codereview mail ++ +
+Gerrit assigns your change a number and URL, which git
codereview
mail
will print, something like:
+
+remote: New Changes: +remote: https://go-review.googlesource.com/99999 math: improved Sin, Cos and Tan precision for very large arguments ++ +
+If you get an error instead, check the +Troubleshooting mail errors section. +
+ ++If your change relates to an open GitHub issue and you have followed the +suggested commit message format, the issue will be updated in a few minutes by a bot, +linking your Gerrit change to it in the comments. +
+ + ++Go maintainers will review your code on Gerrit, and you will get notifications via e-mail. +You can see the review on Gerrit and comment on them there. +You can also reply +using e-mail +if you prefer. +
+ +
+If you need to revise your change after the review, edit the files in
+the same branch you previously created, add them to the Git staging
+area, and then amend the commit with
+git
codereview
change
:
+
+$ git codereview change # amend current commit +(open $EDITOR) +$ git codereview mail # send new changes to Gerrit ++ +
+If you don't need to change the commit description, just save and exit from the editor. +Remember not to touch the special Change-Id line. +
+ +
+Again, make sure that you always keep a single commit in each branch.
+If you add more
+commits by mistake, you can use git rebase
to
+squash them together
+into a single one.
+
+Commit messages in Go follow a specific set of conventions, +which we discuss in this section. +
+ ++Here is an example of a good one: +
+ ++math: improve Sin, Cos and Tan precision for very large arguments + +The existing implementation has poor numerical properties for +large arguments, so use the McGillicutty algorithm to improve +accuracy above 1e10. + +The algorithm is described at https://wikipedia.org/wiki/McGillicutty_Algorithm + +Fixes #159 ++ +
+The first line of the change description is conventionally a short one-line +summary of the change, prefixed by the primary affected package. +
+ ++A rule of thumb is that it should be written so to complete the sentence +"This change modifies Go to _____." +That means it does not start with a capital letter, is not a complete sentence, +and actually summarizes the result of the change. +
+ ++Follow the first line by a blank line. +
+ ++The rest of the description elaborates and should provide context for the +change and explain what it does. +Write in complete sentences with correct punctuation, just like +for your comments in Go. +Don't use HTML, Markdown, or any other markup language. +
+ ++Add any relevant information, such as benchmark data if the change +affects performance. +The benchstat +tool is conventionally used to format +benchmark data for change descriptions. +
+ ++The special notation "Fixes #12345" associates the change with issue 12345 in the +Go issue tracker. +When this change is eventually applied, the issue +tracker will automatically mark the issue as fixed. +
+ ++If the change is a partial step towards the resolution of the issue, +write "Updates #12345" instead. +This will leave a comment in the issue linking back to the change in +Gerrit, but it will not close the issue when the change is applied. +
+ ++If you are sending a change against a golang.org/x/... repository, you must use +the fully-qualified syntax supported by GitHub to make sure the change is +linked to the issue in the main repository, not the x/ repository. +Most issues are tracked in the main repository's issue tracker. +The correct form is "Fixes golang/go#159". +
+ + ++This section explains the review process in detail and how to approach +reviews after a change has been mailed. +
+ + ++When a change is sent to Gerrit, it is usually triaged within a few days. +A maintainer will have a look and provide some initial review that for first-time +contributors usually focuses on basic cosmetics and common mistakes. +These include things like: +
+ +R=go1.12
,
+which means that it will be reviewed later when the tree opens for a new
+development window.
+You can add R=go1.XX
as a comment yourself
+if you know that it's not the correct time frame for the change.
++After an initial reading of your change, maintainers will trigger trybots, +a cluster of servers that will run the full test suite on several different +architectures. +Most trybots complete in a few minutes, at which point a link will +be posted in Gerrit where you can see the results. +
+ ++If the trybot run fails, follow the link and check the full logs of the +platforms on which the tests failed. +Try to understand what broke, update your patch to fix it, and upload again. +Maintainers will trigger a new trybot run to see +if the problem was fixed. +
+ ++Sometimes, the tree can be broken on some platforms for a few hours; if +the failure reported by the trybot doesn't seem related to your patch, go to the +Build Dashboard and check if the same +failure appears in other recent commits on the same platform. +In this case, +feel free to write a comment in Gerrit to mention that the failure is +unrelated to your change, to help maintainers understand the situation. +
+ ++The Go community values very thorough reviews. +Think of each review comment like a ticket: you are expected to somehow "close" it +by acting on it, either by implementing the suggestion or convincing the +reviewer otherwise. +
+ ++After you update the change, go through the review comments and make sure +to reply to every one. +You can click the "Done" button to reply +indicating that you've implemented the reviewer's suggestion; otherwise, +click on "Reply" and explain why you have not, or what you have done instead. +
+It is perfectly normal for changes to go through several rounds of review,
+with one or more reviewers making new comments every time
+and then waiting for an updated change before reviewing again.
+This cycle happens even for experienced contributors, so
+don't be discouraged by it.
+ ++As they near a decision, reviewers will make a "vote" on your change. +The Gerrit voting system involves an integer in the range -2 to +2: +
+ ++At least two maintainers must approve of the change, and at least one +of those maintainers must +2 the change. +The second maintainer may cast a vote of Trust+1, meaning that the +change looks basically OK, but that the maintainer hasn't done the +detailed review required for a +2 vote. +
+ ++After the code has been +2'ed and Trust+1'ed, an approver will +apply it to the master branch using the Gerrit user interface. +This is called "submitting the change". +
+ ++The two steps (approving and submitting) are separate because in some cases maintainers +may want to approve it but not to submit it right away (for instance, +the tree could be temporarily frozen). +
+ ++Submitting a change checks it into the repository. +The change description will include a link to the code review, +which will be updated with a link to the change +in the repository. +Since the method used to integrate the changes is Git's "Cherry Pick", +the commit hashes in the repository will be changed by +the submit operation. +
+ ++If your change has been approved for a few days without being +submitted, feel free to write a comment in Gerrit requesting +submission. +
+ + ++In addition to the information here, the Go community maintains a CodeReview wiki page. +Feel free to contribute to this page as you learn more about the review process. +
+ + + ++This section collects a number of other comments that are +outside the issue/edit/code review/submit process itself. +
+ + +
+Files in the Go repository don't list author names, both to avoid clutter
+and to avoid having to keep the lists up to date.
+Instead, your name will appear in the
+change log and in the CONTRIBUTORS
file and perhaps the AUTHORS
file.
+These files are automatically generated from the commit logs periodically.
+The AUTHORS
file defines who “The Go
+Authors”—the copyright holders—are.
+
+New files that you contribute should use the standard copyright header: +
+ ++// Copyright 2021 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. ++ +
+(Use the current year if you're reading this in 2022 or beyond.) +Files in the repository are copyrighted the year they are added. +Do not update the copyright year on files that you change. +
+ + + + +
+The most common way that the git
codereview
mail
+command fails is because the e-mail address in the commit does not match the one
+that you used during the registration process.
+
+
+If you see something like...
+
+remote: Processing changes: refs: 1, done +remote: +remote: ERROR: In commit ab13517fa29487dcf8b0d48916c51639426c5ee9 +remote: ERROR: author email address XXXXXXXXXXXXXXXXXXX +remote: ERROR: does not match your user account. ++ +
+you need to configure Git for this repository to use the +e-mail address that you registered with. +To change the e-mail address to ensure this doesn't happen again, run: +
+ ++$ git config user.email email@address.com ++ +
+Then change the commit to use this alternative e-mail address with this command: +
+ ++$ git commit --amend --author="Author Name <email@address.com>" ++ +
+Then retry by running: +
+ ++$ git codereview mail ++ + +
+Running all.bash
for every single change to the code tree
+is burdensome.
+Even though it is strongly suggested to run it before
+sending a change, during the normal development cycle you may want
+to compile and test only the package you are developing.
+
make.bash
instead of all.bash
+to only rebuild the Go tool chain without running the whole test suite.
+Or you
+can run run.bash
to only run the whole test suite without rebuilding
+the tool chain.
+You can think of all.bash
as make.bash
+followed by run.bash
.
+$GODIR
.
+The go
tool built by $GODIR/src/make.bash
will be installed
+in $GODIR/bin/go
and you
+can invoke it to test your code.
+For instance, if you
+have modified the compiler and you want to test how it affects the
+test suite of your own project, just run go
test
+using it:
+
++$ cd <MYPROJECTDIR> +$ $GODIR/bin/go test ++
+$ cd $GODIR/src/crypto/sha1 +$ [make changes...] +$ $GODIR/bin/go test . ++
compile
tool (which is the internal binary invoked
+by go
build
to compile each single package).
+After that, you will want to test it by compiling or running something.
+
++$ cd $GODIR/src +$ [make changes...] +$ $GODIR/bin/go install cmd/compile +$ $GODIR/bin/go build [something...] # test the new compiler +$ $GODIR/bin/go run [something...] # test the new compiler +$ $GODIR/bin/go test [something...] # test the new compiler ++ +The same applies to other internal tools of the Go tool chain, +such as
asm
, cover
, link
, and so on.
+Just recompile and install the tool using go
+install
cmd/<TOOL>
and then use
+the built Go binary to test it.
+$GODIR/test
that contains
+several black-box and regression tests.
+The test suite is run
+by all.bash
but you can also run it manually:
+
++$ cd $GODIR/test +$ $GODIR/bin/go run run.go ++
+Unless explicitly told otherwise, such as in the discussion leading +up to sending in the change, it's better not to specify a reviewer. +All changes are automatically CC'ed to the +golang-codereviews@googlegroups.com +mailing list. +If this is your first ever change, there may be a moderation +delay before it appears on the mailing list, to prevent spam. +
+ +
+You can specify a reviewer or CC interested parties
+using the -r
or -cc
options.
+Both accept a comma-separated list of e-mail addresses:
+
+$ git codereview mail -r joe@golang.org -cc mabel@example.com,math-nuts@swtch.com ++ + +
+While you were working, others might have submitted changes to the repository. +To update your local branch, run +
+ ++$ git codereview sync ++ +
+(Under the covers this runs
+git
pull
-r
.)
+
+As part of the review process reviewers can propose changes directly (in the +GitHub workflow this would be someone else attaching commits to a pull request). + +You can import these changes proposed by someone else into your local Git repository. +On the Gerrit review page, click the "Download ▼" link in the upper right +corner, copy the "Checkout" command and run it from your local Git repo. +It will look something like this: +
+ ++$ git fetch https://go.googlesource.com/review refs/changes/21/13245/1 && git checkout FETCH_HEAD ++ +
+To revert, change back to the branch you were working in. +
+ + +
+The git-codereview
command can be run directly from the shell
+by typing, for instance,
+
+$ git codereview sync ++ +
+but it is more convenient to set up aliases for git-codereview
's own
+subcommands, so that the above becomes,
+
+$ git sync ++ +
+The git-codereview
subcommands have been chosen to be distinct from
+Git's own, so it's safe to define these aliases.
+To install them, copy this text into your
+Git configuration file (usually .gitconfig
in your home directory):
+
+[alias] + change = codereview change + gofmt = codereview gofmt + mail = codereview mail + pending = codereview pending + submit = codereview submit + sync = codereview sync ++ + +
+Advanced users may want to stack up related commits in a single branch. +Gerrit allows for changes to be dependent on each other, forming such a dependency chain. +Each change will need to be approved and submitted separately but the dependency +will be visible to reviewers. +
+ ++To send out a group of dependent changes, keep each change as a different commit under +the same branch, and then run: +
+ ++$ git codereview mail HEAD ++ +
+Make sure to explicitly specify HEAD
, which is usually not required when sending
+single changes. More details can be found in the git-codereview documentation.
+
+The following instructions apply to the standard toolchain
+(the gc
Go compiler and tools).
+Gccgo has native gdb support.
+
+Note that
+Delve is a better
+alternative to GDB when debugging Go programs built with the standard
+toolchain. It understands the Go runtime, data structures, and
+expressions better than GDB. Delve currently supports Linux, OSX,
+and Windows on amd64
.
+For the most up-to-date list of supported platforms, please see
+
+ the Delve documentation.
+
+GDB does not understand Go programs well. +The stack management, threading, and runtime contain aspects that differ +enough from the execution model GDB expects that they can confuse +the debugger and cause incorrect results even when the program is +compiled with gccgo. +As a consequence, although GDB can be useful in some situations (e.g., +debugging Cgo code, or debugging the runtime itself), it is not +a reliable debugger for Go programs, particularly heavily concurrent +ones. Moreover, it is not a priority for the Go project to address +these issues, which are difficult. +
+ ++In short, the instructions below should be taken only as a guide to how +to use GDB when it works, not as a guarantee of success. + +Besides this overview you might want to consult the +GDB manual. +
+ ++
+ +
+When you compile and link your Go programs with the gc
toolchain
+on Linux, macOS, FreeBSD or NetBSD, the resulting binaries contain DWARFv4
+debugging information that recent versions (≥7.5) of the GDB debugger can
+use to inspect a live process or a core dump.
+
+Pass the '-w'
flag to the linker to omit the debug information
+(for example, go
build
-ldflags=-w
prog.go
).
+
+The code generated by the gc
compiler includes inlining of
+function invocations and registerization of variables. These optimizations
+can sometimes make debugging with gdb
harder.
+If you find that you need to disable these optimizations,
+build your program using go
build
-gcflags=all="-N -l"
.
+
+If you want to use gdb to inspect a core dump, you can trigger a dump
+on a program crash, on systems that permit it, by setting
+GOTRACEBACK=crash
in the environment (see the
+ runtime package
+documentation for more info).
+
(gdb) list +(gdb) list line +(gdb) list file.go:line +(gdb) break line +(gdb) break file.go:line +(gdb) disas+
(gdb) bt +(gdb) frame n+
(gdb) info locals +(gdb) info args +(gdb) p variable +(gdb) whatis variable+
(gdb) info variables regexp+
+A recent extension mechanism to GDB allows it to load extension scripts for a +given binary. The toolchain uses this to extend GDB with a handful of +commands to inspect internals of the runtime code (such as goroutines) and to +pretty print the built-in map, slice and channel types. +
+ +(gdb) p var+
(gdb) p $len(var)+
(gdb) p $dtype(var) +(gdb) iface var+
Known issue: GDB can’t automatically find the dynamic +type of an interface value if its long name differs from its short name +(annoying when printing stacktraces, the pretty printer falls back to printing +the short type name and a pointer).
+(gdb) info goroutines +(gdb) goroutine n cmd +(gdb) help goroutine+For example: +
(gdb) goroutine 12 bt+You can inspect all goroutines by passing
all
instead of a specific goroutine's ID.
+For example:
+(gdb) goroutine all bt+
+If you'd like to see how this works, or want to extend it, take a look at src/runtime/runtime-gdb.py in
+the Go source distribution. It depends on some special magic types
+(hash<T,U>
) and variables (runtime.m
and
+runtime.g
) that the linker
+(src/cmd/link/internal/ld/dwarf.go) ensures are described in
+the DWARF code.
+
+If you're interested in what the debugging information looks like, run
+objdump
-W
a.out
and browse through the .debug_*
+sections.
+
"fmt.Print"
as an unstructured literal with a "."
+that needs to be quoted. It objects even more strongly to method names of
+the form pkg.(*MyType).Meth
.
+go
+build -ldflags=-compressdwarf=false
.
+(For convenience you can put the -ldflags
option in
+the GOFLAGS
+environment variable so that you don't have to specify it each time.)
+
+In this tutorial we will inspect the binary of the
+regexp package's unit tests. To build the binary,
+change to $GOROOT/src/regexp
and run go
test
-c
.
+This should produce an executable file named regexp.test
.
+
+Launch GDB, debugging regexp.test
:
+
+$ gdb regexp.test +GNU gdb (GDB) 7.2-gg8 +Copyright (C) 2010 Free Software Foundation, Inc. +License GPLv 3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> +Type "show copying" and "show warranty" for licensing/warranty details. +This GDB was configured as "x86_64-linux". + +Reading symbols from /home/user/go/src/regexp/regexp.test... +done. +Loading Go Runtime support. +(gdb) ++ +
+The message "Loading Go Runtime support" means that GDB loaded the
+extension from $GOROOT/src/runtime/runtime-gdb.py
.
+
+To help GDB find the Go runtime sources and the accompanying support script,
+pass your $GOROOT
with the '-d'
flag:
+
+$ gdb regexp.test -d $GOROOT ++ +
+If for some reason GDB still can't find that directory or that script, you can load
+it by hand by telling gdb (assuming you have the go sources in
+~/go/
):
+
+(gdb) source ~/go/src/runtime/runtime-gdb.py +Loading Go Runtime support. ++ +
+Use the "l"
or "list"
command to inspect source code.
+
+(gdb) l ++ +
+List a specific part of the source parameterizing "list"
with a
+function name (it must be qualified with its package name).
+
+(gdb) l main.main ++ +
+List a specific file and line number: +
+ ++(gdb) l regexp.go:1 +(gdb) # Hit enter to repeat last command. Here, this lists next 10 lines. ++ + +
+Variable and function names must be qualified with the name of the packages
+they belong to. The Compile
function from the regexp
+package is known to GDB as 'regexp.Compile'
.
+
+Methods must be qualified with the name of their receiver types. For example,
+the *Regexp
type’s String
method is known as
+'regexp.(*Regexp).String'
.
+
+Variables that shadow other variables are magically suffixed with a number in the debug info. +Variables referenced by closures will appear as pointers magically prefixed with '&'. +
+ +
+Set a breakpoint at the TestFind
function:
+
+(gdb) b 'regexp.TestFind' +Breakpoint 1 at 0x424908: file /home/user/go/src/regexp/find_test.go, line 148. ++ +
+Run the program: +
+ ++(gdb) run +Starting program: /home/user/go/src/regexp/regexp.test + +Breakpoint 1, regexp.TestFind (t=0xf8404a89c0) at /home/user/go/src/regexp/find_test.go:148 +148 func TestFind(t *testing.T) { ++ +
+Execution has paused at the breakpoint. +See which goroutines are running, and what they're doing: +
+ ++(gdb) info goroutines + 1 waiting runtime.gosched +* 13 running runtime.goexit ++ +
+the one marked with the *
is the current goroutine.
+
+Look at the stack trace for where we’ve paused the program: +
+ ++(gdb) bt # backtrace +#0 regexp.TestFind (t=0xf8404a89c0) at /home/user/go/src/regexp/find_test.go:148 +#1 0x000000000042f60b in testing.tRunner (t=0xf8404a89c0, test=0x573720) at /home/user/go/src/testing/testing.go:156 +#2 0x000000000040df64 in runtime.initdone () at /home/user/go/src/runtime/proc.c:242 +#3 0x000000f8404a89c0 in ?? () +#4 0x0000000000573720 in ?? () +#5 0x0000000000000000 in ?? () ++ +
+The other goroutine, number 1, is stuck in runtime.gosched
, blocked on a channel receive:
+
+(gdb) goroutine 1 bt +#0 0x000000000040facb in runtime.gosched () at /home/user/go/src/runtime/proc.c:873 +#1 0x00000000004031c9 in runtime.chanrecv (c=void, ep=void, selected=void, received=void) + at /home/user/go/src/runtime/chan.c:342 +#2 0x0000000000403299 in runtime.chanrecv1 (t=void, c=void) at/home/user/go/src/runtime/chan.c:423 +#3 0x000000000043075b in testing.RunTests (matchString={void (struct string, struct string, bool *, error *)} + 0x7ffff7f9ef60, tests= []testing.InternalTest = {...}) at /home/user/go/src/testing/testing.go:201 +#4 0x00000000004302b1 in testing.Main (matchString={void (struct string, struct string, bool *, error *)} + 0x7ffff7f9ef80, tests= []testing.InternalTest = {...}, benchmarks= []testing.InternalBenchmark = {...}) +at /home/user/go/src/testing/testing.go:168 +#5 0x0000000000400dc1 in main.main () at /home/user/go/src/regexp/_testmain.go:98 +#6 0x00000000004022e7 in runtime.mainstart () at /home/user/go/src/runtime/amd64/asm.s:78 +#7 0x000000000040ea6f in runtime.initdone () at /home/user/go/src/runtime/proc.c:243 +#8 0x0000000000000000 in ?? () ++ +
+The stack frame shows we’re currently executing the regexp.TestFind
function, as expected.
+
+(gdb) info frame +Stack level 0, frame at 0x7ffff7f9ff88: + rip = 0x425530 in regexp.TestFind (/home/user/go/src/regexp/find_test.go:148); + saved rip 0x430233 + called by frame at 0x7ffff7f9ffa8 + source language minimal. + Arglist at 0x7ffff7f9ff78, args: t=0xf840688b60 + Locals at 0x7ffff7f9ff78, Previous frame's sp is 0x7ffff7f9ff88 + Saved registers: + rip at 0x7ffff7f9ff80 ++ +
+The command info
locals
lists all variables local to the function and their values, but is a bit
+dangerous to use, since it will also try to print uninitialized variables. Uninitialized slices may cause gdb to try
+to print arbitrarily large arrays.
+
+The function’s arguments: +
+ ++(gdb) info args +t = 0xf840688b60 ++ +
+When printing the argument, notice that it’s a pointer to a
+testing.T
value. Note that GDB has incorrectly put the *
+on the right-hand side of the type name and made up a 'struct' keyword, in traditional C style.
+
+(gdb) p re +(gdb) p t +$1 = (struct testing.T *) 0xf840688b60 +(gdb) p t +$1 = (struct testing.T *) 0xf840688b60 +(gdb) p *t +$2 = {errors = "", failed = false, ch = 0xf8406f5690} +(gdb) p *t->ch +$3 = struct hchan<*testing.T> ++ +
+That struct
hchan<*testing.T>
is the
+runtime-internal representation of a channel. It is currently empty,
+or gdb would have pretty-printed its contents.
+
+Stepping forward: +
+ ++(gdb) n # execute next line +149 for _, test := range findTests { +(gdb) # enter is repeat +150 re := MustCompile(test.pat) +(gdb) p test.pat +$4 = "" +(gdb) p re +$5 = (struct regexp.Regexp *) 0xf84068d070 +(gdb) p *re +$6 = {expr = "", prog = 0xf840688b80, prefix = "", prefixBytes = []uint8, prefixComplete = true, + prefixRune = 0, cond = 0 '\000', numSubexp = 0, longest = false, mu = {state = 0, sema = 0}, + machine = []*regexp.machine} +(gdb) p *re->prog +$7 = {Inst = []regexp/syntax.Inst = {{Op = 5 '\005', Out = 0, Arg = 0, Rune = []int}, {Op = + 6 '\006', Out = 2, Arg = 0, Rune = []int}, {Op = 4 '\004', Out = 0, Arg = 0, Rune = []int}}, + Start = 1, NumCap = 2} ++ + +
+We can step into the String
function call with "s"
:
+
+(gdb) s +regexp.(*Regexp).String (re=0xf84068d070, noname=void) at /home/user/go/src/regexp/regexp.go:97 +97 func (re *Regexp) String() string { ++ +
+Get a stack trace to see where we are: +
+ ++(gdb) bt +#0 regexp.(*Regexp).String (re=0xf84068d070, noname=void) + at /home/user/go/src/regexp/regexp.go:97 +#1 0x0000000000425615 in regexp.TestFind (t=0xf840688b60) + at /home/user/go/src/regexp/find_test.go:151 +#2 0x0000000000430233 in testing.tRunner (t=0xf840688b60, test=0x5747b8) + at /home/user/go/src/testing/testing.go:156 +#3 0x000000000040ea6f in runtime.initdone () at /home/user/go/src/runtime/proc.c:243 +.... ++ +
+Look at the source code: +
+ ++(gdb) l +92 mu sync.Mutex +93 machine []*machine +94 } +95 +96 // String returns the source text used to compile the regular expression. +97 func (re *Regexp) String() string { +98 return re.expr +99 } +100 +101 // Compile parses a regular expression and returns, if successful, ++ +
+GDB's pretty printing mechanism is triggered by regexp matches on type names. An example for slices: +
+ ++(gdb) p utf +$22 = []uint8 = {0 '\000', 0 '\000', 0 '\000', 0 '\000'} ++ +
+Since slices, arrays and strings are not C pointers, GDB can't interpret the subscripting operation for you, but +you can look inside the runtime representation to do that (tab completion helps here): +
++ +(gdb) p slc +$11 = []int = {0, 0} +(gdb) p slc-><TAB> +array slc len +(gdb) p slc->array +$12 = (int *) 0xf84057af00 +(gdb) p slc->array[1] +$13 = 0+ + + +
+The extension functions $len and $cap work on strings, arrays and slices: +
+ ++(gdb) p $len(utf) +$23 = 4 +(gdb) p $cap(utf) +$24 = 4 ++ +
+Channels and maps are 'reference' types, which gdb shows as pointers to C++-like types hash<int,string>*
. Dereferencing will trigger pretty printing.
+
+Interfaces are represented in the runtime as a pointer to a type descriptor and a pointer to a value. The Go GDB runtime extension decodes this and automatically triggers pretty printing for the runtime type. The extension function $dtype
decodes the dynamic type for you (examples are taken from a breakpoint at regexp.go
line 293.)
+
+(gdb) p i +$4 = {str = "cbb"} +(gdb) whatis i +type = regexp.input +(gdb) p $dtype(i) +$26 = (struct regexp.inputBytes *) 0xf8400b4930 +(gdb) iface i +regexp.input: struct regexp.inputBytes * +diff --git a/_content/doc/diagnostics.html b/_content/doc/diagnostics.html new file mode 100644 index 00000000..438cdce4 --- /dev/null +++ b/_content/doc/diagnostics.html @@ -0,0 +1,472 @@ + + + + +
+The Go ecosystem provides a large suite of APIs and tools to +diagnose logic and performance problems in Go programs. This page +summarizes the available tools and helps Go users pick the right one +for their specific problem. +
+ ++Diagnostics solutions can be categorized into the following groups: +
+ ++Note: Some diagnostics tools may interfere with each other. For example, precise +memory profiling skews CPU profiles and goroutine blocking profiling affects scheduler +trace. Use tools in isolation to get more precise info. +
+ +
+Profiling is useful for identifying expensive or frequently called sections
+of code. The Go runtime provides
+profiling data in the format expected by the
+pprof visualization tool.
+The profiling data can be collected during testing
+via go
test
or endpoints made available from the
+net/http/pprof package. Users need to collect the profiling data and use pprof tools to filter
+and visualize the top code paths.
+
Predefined profiles provided by the runtime/pprof package:
+ +runtime.SetBlockProfileRate
to enable it.
+runtime.SetMutexProfileFraction
to enable it.
+What other profilers can I use to profile Go programs?
+ ++On Linux, perf tools +can be used for profiling Go programs. Perf can profile +and unwind cgo/SWIG code and kernel, so it can be useful to get insights into +native/kernel performance bottlenecks. On macOS, +Instruments +suite can be used profile Go programs. +
+ +Can I profile my production services?
+ +Yes. It is safe to profile programs in production, but enabling +some profiles (e.g. the CPU profile) adds cost. You should expect to +see performance downgrade. The performance penalty can be estimated +by measuring the overhead of the profiler before turning it on in +production. +
+ ++You may want to periodically profile your production services. +Especially in a system with many replicas of a single process, selecting +a random replica periodically is a safe option. +Select a production process, profile it for +X seconds for every Y seconds and save the results for visualization and +analysis; then repeat periodically. Results may be manually and/or automatically +reviewed to find problems. +Collection of profiles can interfere with each other, +so it is recommended to collect only a single profile at a time. +
+ ++What are the best ways to visualize the profiling data? +
+ +
+The Go tools provide text, graph, and callgrind
+visualization of the profile data using
+go tool pprof
.
+Read Profiling Go programs
+to see them in action.
+
+
+
+Listing of the most expensive calls as text.
+
+
+
+Visualization of the most expensive calls as a graph.
+
Weblist view displays the expensive parts of the source line by line in
+an HTML page. In the following example, 530ms is spent in the
+runtime.concatstrings
and cost of each line is presented
+in the listing.
+
+
+Visualization of the most expensive calls as weblist.
+
+Another way to visualize profile data is a flame graph. +Flame graphs allow you to move in a specific ancestry path, so you can zoom +in/out of specific sections of code. +The upstream pprof +has support for flame graphs. +
+ +
+
+
+Flame graphs offer visualization to spot the most expensive code paths.
+
Am I restricted to the built-in profiles?
+ ++Additionally to what is provided by the runtime, Go users can create +their custom profiles via pprof.Profile +and use the existing tools to examine them. +
+ +Can I serve the profiler handlers (/debug/pprof/...) on a different path and port?
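+As a small, hypothetical illustration (the profile name, lease type, and
+helper functions below are invented for the example; assumes runtime/pprof
+is imported):
+
+type lease struct{ id int }
+
+// leaseProfile tracks every lease currently held, together with the
+// call stack that acquired it.
+var leaseProfile = pprof.NewProfile("example.com/pool/leases")
+
+func acquire(l *lease) {
+    leaseProfile.Add(l, 1) // record the object and the acquiring call stack
+}
+
+func release(l *lease) {
+    leaseProfile.Remove(l) // drop it from the profile when returned
+}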
+ +
+Yes. The net/http/pprof
package registers its handlers to the default
+mux by default, but you can also register them yourself by using the handlers
+exported from the package.
+
+For example, the following example will serve the pprof.Profile +handler on :7777 at /custom_debug_path/profile: +
+ ++
+package main + +import ( + "log" + "net/http" + "net/http/pprof" +) + +func main() { + mux := http.NewServeMux() + mux.HandleFunc("/custom_debug_path/profile", pprof.Profile) + log.Fatal(http.ListenAndServe(":7777", mux)) +} ++ + +
+Tracing is a way to instrument code to analyze latency throughout the +lifecycle of a chain of calls. Go provides +golang.org/x/net/trace +package as a minimal tracing backend per Go node and provides a minimal +instrumentation library with a simple dashboard. Go also provides +an execution tracer to trace the runtime events within an interval. +
+ +Tracing enables us to:
+ ++In monolithic systems, it's relatively easy to collect diagnostic data +from the building blocks of a program. All modules live within one +process and share common resources to report logs, errors, and other +diagnostic information. Once your system grows beyond a single process and +starts to become distributed, it becomes harder to follow a call starting +from the front-end web server to all of its back-ends until a response is +returned back to the user. This is where distributed tracing plays a big +role to instrument and analyze your production systems. +
+ ++Distributed tracing is a way to instrument code to analyze latency throughout +the lifecycle of a user request. When a system is distributed and when +conventional profiling and debugging tools don’t scale, you might want +to use distributed tracing tools to analyze the performance of your user +requests and RPCs. +
+ +Distributed tracing enables us to:
+ +The Go ecosystem provides various distributed tracing libraries per tracing system +and backend-agnostic ones.
+ + +Is there a way to automatically intercept each function call and create traces?
+ ++Go doesn’t provide a way to automatically intercept every function call and create +trace spans. You need to manually instrument your code to create, end, and annotate spans. +
+ +How should I propagate trace headers in Go libraries?
+ +
+You can propagate trace identifiers and tags in the
+context.Context
.
+There is no canonical trace key or common representation of trace headers
+in the industry yet. Each tracing provider is responsible for providing propagation
+utilities in their Go libraries.
+
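+For illustration only, a provider might carry its trace ID in a context
+using an unexported key; the key and helper names below are hypothetical,
+not a standard API (assumes the context package is imported):
+
+type traceIDKey struct{}
+
+// WithTraceID returns a copy of ctx that carries the given trace ID.
+func WithTraceID(ctx context.Context, id string) context.Context {
+    return context.WithValue(ctx, traceIDKey{}, id)
+}
+
+// TraceIDFrom reports the trace ID stored in ctx, if any.
+func TraceIDFrom(ctx context.Context) (string, bool) {
+    id, ok := ctx.Value(traceIDKey{}).(string)
+    return id, ok
+}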
+What other low-level events from the standard library or +runtime can be included in a trace? +
+ +
+The standard library and runtime are trying to expose several additional APIs
+to notify on low level internal events. For example,
+httptrace.ClientTrace
+provides APIs to follow low-level events in the life cycle of an outgoing request.
+There is an ongoing effort to retrieve low-level runtime events from
+the runtime execution tracer and allow users to define and record their user events.
+
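+As a rough sketch of httptrace.ClientTrace in use (the hooks and URL are
+chosen only for illustration; assumes log, net/http, and net/http/httptrace
+are imported):
+
+func traceRequest() error {
+    req, err := http.NewRequest("GET", "https://example.com", nil)
+    if err != nil {
+        return err
+    }
+    trace := &httptrace.ClientTrace{
+        GotConn: func(info httptrace.GotConnInfo) {
+            log.Printf("connection reused: %v", info.Reused)
+        },
+        GotFirstResponseByte: func() {
+            log.Print("first response byte received")
+        },
+    }
+    req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
+    resp, err := http.DefaultClient.Do(req)
+    if err != nil {
+        return err
+    }
+    return resp.Body.Close()
+}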
+Debugging is the process of identifying why a program misbehaves. +Debuggers allow us to understand a program’s execution flow and current state. +There are several styles of debugging; this section will only focus on attaching +a debugger to a program and core dump debugging. +
+ +Go users mostly use the following debuggers:
+ +How well do debuggers work with Go programs?
+ +
+The gc
compiler performs optimizations such as
+function inlining and variable registerization. These optimizations
+sometimes make debugging with debuggers harder. There is an ongoing
+effort to improve the quality of the DWARF information generated for
+optimized binaries. Until those improvements are available, we recommend
+disabling optimizations when building the code being debugged. The following
+command builds a package with no compiler optimizations:
+
+
+
+$ go build -gcflags=all="-N -l" ++ + +As part of the improvement effort, Go 1.10 introduced a new compiler +flag
-dwarflocationlists
. The flag causes the compiler to
+add location lists that helps debuggers work with optimized binaries.
+The following command builds a package with optimizations but with
+the DWARF location lists:
+
++
+$ go build -gcflags="-dwarflocationlists=true" ++ + +
What’s the recommended debugger user interface?
+ ++Even though both delve and gdb provides CLIs, most editor integrations +and IDEs provides debugging-specific user interfaces. +
+ +Is it possible to do postmortem debugging with Go programs?
+ ++A core dump file is a file that contains the memory dump of a running +process and its process status. It is primarily used for post-mortem +debugging of a program and to understand its state +while it is still running. These two cases make debugging of core +dumps a good diagnostic aid to postmortem and analyze production +services. It is possible to obtain core files from Go programs and +use delve or gdb to debug, see the +core dump debugging +page for a step-by-step guide. +
+ ++The runtime provides stats and reporting of internal events for +users to diagnose performance and utilization problems at the +runtime level. +
+ ++Users can monitor these stats to better understand the overall +health and performance of Go programs. +Some frequently monitored stats and states: +
+ +runtime.ReadMemStats
+reports the metrics related to heap
+allocation and garbage collection. Memory stats are useful for
+monitoring how much memory resources a process is consuming,
+whether the process can utilize memory well, and to catch
+memory leaks.debug.ReadGCStats
+reads statistics about garbage collection.
+It is useful to see how much of the resources are spent on GC pauses.
+It also reports a timeline of garbage collector pauses and pause time percentiles.debug.Stack
+returns the current stack trace. Stack trace
+is useful to see how many goroutines are currently running,
+what they are doing, and whether they are blocked or not.debug.WriteHeapDump
+suspends the execution of all goroutines
+and allows you to dump the heap to a file. A heap dump is a
+snapshot of a Go process' memory at a given time. It contains all
+allocated objects as well as goroutines, finalizers, and more.runtime.NumGoroutine
+returns the number of current goroutines.
+The value can be monitored to see whether enough goroutines are
+utilized, or to detect goroutine leaks.Go comes with a runtime execution tracer to capture a wide range +of runtime events. Scheduling, syscall, garbage collections, +heap size, and other events are collected by runtime and available +for visualization by the go tool trace. Execution tracer is a tool +to detect latency and utilization problems. You can examine how well +the CPU is utilized, and when networking or syscalls are a cause of +preemption for the goroutines.
+ +Tracer is useful to:
+However, it is not great for identifying hot spots, such as finding
+the cause of excessive memory or CPU usage.
+Use the profiling tools first to address them.
+ ++ +
+ +Above, the go tool trace visualization shows the execution started +fine, and then it became serialized. It suggests that there might +be lock contention for a shared resource that creates a bottleneck.
+ +See go
tool
trace
+to collect and analyze runtime traces.
+
The runtime also emits events and information if the
+GODEBUG
+environment variable is set accordingly.
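+For example (a sketch; ./server is only a placeholder binary name), setting
+gctrace=1 makes the garbage collector print a summary line to standard error
+at each collection:
+
+$ GODEBUG=gctrace=1 ./server
+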
+
+The GODEBUG environment variable can also be used to disable the use of
+instruction set extensions in the standard library and runtime.
+
+ ++ This document lists commonly used editor plugins and IDEs from the Go ecosystem + that make Go development more productive and seamless. + A comprehensive list of editor support and IDEs for Go development is available at + the wiki. +
+ ++The Go ecosystem provides a variety of editor plugins and IDEs to enhance your day-to-day +editing, navigation, testing, and debugging experience. +
+ ++Note that these are only a few top solutions; a more comprehensive +community-maintained list of +IDEs and text editor plugins +is available at the Wiki. +
diff --git a/_content/doc/effective_go.html b/_content/doc/effective_go.html new file mode 100644 index 00000000..76204029 --- /dev/null +++ b/_content/doc/effective_go.html @@ -0,0 +1,3673 @@ + + ++Go is a new language. Although it borrows ideas from +existing languages, +it has unusual properties that make effective Go programs +different in character from programs written in its relatives. +A straightforward translation of a C++ or Java program into Go +is unlikely to produce a satisfactory result—Java programs +are written in Java, not Go. +On the other hand, thinking about the problem from a Go +perspective could produce a successful but quite different +program. +In other words, +to write Go well, it's important to understand its properties +and idioms. +It's also important to know the established conventions for +programming in Go, such as naming, formatting, program +construction, and so on, so that programs you write +will be easy for other Go programmers to understand. +
+ ++This document gives tips for writing clear, idiomatic Go code. +It augments the language specification, +the Tour of Go, +and How to Write Go Code, +all of which you +should read first. +
+ ++The Go package sources +are intended to serve not +only as the core library but also as examples of how to +use the language. +Moreover, many of the packages contain working, self-contained +executable examples you can run directly from the +golang.org web site, such as +this one (if +necessary, click on the word "Example" to open it up). +If you have a question about how to approach a problem or how something +might be implemented, the documentation, code and examples in the +library can provide answers, ideas and +background. +
+ + ++Formatting issues are the most contentious +but the least consequential. +People can adapt to different formatting styles +but it's better if they don't have to, and +less time is devoted to the topic +if everyone adheres to the same style. +The problem is how to approach this Utopia without a long +prescriptive style guide. +
+ +
+With Go we take an unusual
+approach and let the machine
+take care of most formatting issues.
+The gofmt
program
+(also available as go fmt
, which
+operates at the package level rather than source file level)
+reads a Go program
+and emits the source in a standard style of indentation
+and vertical alignment, retaining and if necessary
+reformatting comments.
+If you want to know how to handle some new layout
+situation, run gofmt
; if the answer doesn't
+seem right, rearrange your program (or file a bug about gofmt
),
+don't work around it.
+
+As an example, there's no need to spend time lining up
+the comments on the fields of a structure.
+Gofmt
will do that for you. Given the
+declaration
+
+type T struct { + name string // name of the object + value int // its value +} ++ +
+gofmt
will line up the columns:
+
+type T struct { + name string // name of the object + value int // its value +} ++ +
+All Go code in the standard packages has been formatted with gofmt
.
+
+Some formatting details remain. Very briefly:
+
+Indentation: we use tabs for indentation, and gofmt
+emits them by default.
+Use spaces only if you must.
+
+Parentheses: Go needs fewer parentheses than C and Java:
+control structures (if, for, switch) do not have parentheses in
+their syntax.
+Also, the operator precedence hierarchy is shorter and clearer, so
+x<<8 + y<<16
+means what the spacing implies, unlike in the other languages.
+Go provides C-style /* */
block comments
+and C++-style //
line comments.
+Line comments are the norm;
+block comments appear mostly as package comments, but
+are useful within an expression or to disable large swaths of code.
+
+The program—and web server—godoc
processes
+Go source files to extract documentation about the contents of the
+package.
+Comments that appear before top-level declarations, with no intervening newlines,
+are extracted along with the declaration to serve as explanatory text for the item.
+The nature and style of these comments determines the
+quality of the documentation godoc
produces.
+
+Every package should have a package comment, a block
+comment preceding the package clause.
+For multi-file packages, the package comment only needs to be
+present in one file, and any one will do.
+The package comment should introduce the package and
+provide information relevant to the package as a whole.
+It will appear first on the godoc
page and
+should set up the detailed documentation that follows.
+
+/* +Package regexp implements a simple library for regular expressions. + +The syntax of the regular expressions accepted is: + + regexp: + concatenation { '|' concatenation } + concatenation: + { closure } + closure: + term [ '*' | '+' | '?' ] + term: + '^' + '$' + '.' + character + '[' [ '^' ] character-ranges ']' + '(' regexp ')' +*/ +package regexp ++ +
+If the package is simple, the package comment can be brief. +
+ ++// Package path implements utility routines for +// manipulating slash-separated filename paths. ++ +
+Comments do not need extra formatting such as banners of stars.
+The generated output may not even be presented in a fixed-width font, so don't depend
+on spacing for alignment—godoc
, like gofmt
,
+takes care of that.
+The comments are uninterpreted plain text, so HTML and other
+annotations such as _this_
will reproduce verbatim and should
+not be used.
+One adjustment godoc
does do is to display indented
+text in a fixed-width font, suitable for program snippets.
+The package comment for the
+fmt
package uses this to good effect.
+
+Depending on the context, godoc
might not even
+reformat comments, so make sure they look good straight up:
+use correct spelling, punctuation, and sentence structure,
+fold long lines, and so on.
+
+Inside a package, any comment immediately preceding a top-level declaration +serves as a doc comment for that declaration. +Every exported (capitalized) name in a program should +have a doc comment. +
+ ++Doc comments work best as complete sentences, which allow +a wide variety of automated presentations. +The first sentence should be a one-sentence summary that +starts with the name being declared. +
+ ++// Compile parses a regular expression and returns, if successful, +// a Regexp that can be used to match against text. +func Compile(str string) (*Regexp, error) { ++ +
+If every doc comment begins with the name of the item it describes,
+you can use the doc
+subcommand of the go tool
+and run the output through grep
.
+Imagine you couldn't remember the name "Compile" but were looking for
+the parsing function for regular expressions, so you ran
+the command,
+
+$ go doc -all regexp | grep -i parse ++ +
+If all the doc comments in the package began, "This function...", grep
+wouldn't help you remember the name. But because the package starts each
+doc comment with the name, you'd see something like this,
+which recalls the word you're looking for.
+
+$ go doc -all regexp | grep -i parse + Compile parses a regular expression and returns, if successful, a Regexp + MustCompile is like Compile but panics if the expression cannot be parsed. + parsed. It simplifies safe initialization of global variables holding +$ ++ +
+Go's declaration syntax allows grouping of declarations. +A single doc comment can introduce a group of related constants or variables. +Since the whole declaration is presented, such a comment can often be perfunctory. +
+ ++// Error codes returned by failures to parse an expression. +var ( + ErrInternal = errors.New("regexp: internal error") + ErrUnmatchedLpar = errors.New("regexp: unmatched '('") + ErrUnmatchedRpar = errors.New("regexp: unmatched ')'") + ... +) ++ +
+Grouping can also indicate relationships between items, +such as the fact that a set of variables is protected by a mutex. +
+ ++var ( + countLock sync.Mutex + inputCount uint32 + outputCount uint32 + errorCount uint32 +) ++ +
+Names are as important in Go as in any other language. +They even have semantic effect: +the visibility of a name outside a package is determined by whether its +first character is upper case. +It's therefore worth spending a little time talking about naming conventions +in Go programs. +
+ + ++When a package is imported, the package name becomes an accessor for the +contents. After +
+ ++import "bytes" ++ +
+the importing package can talk about bytes.Buffer
. It's
+helpful if everyone using the package can use the same name to refer to
+its contents, which implies that the package name should be good:
+short, concise, evocative. By convention, packages are given
+lower case, single-word names; there should be no need for underscores
+or mixedCaps.
+Err on the side of brevity, since everyone using your
+package will be typing that name.
+And don't worry about collisions a priori.
+The package name is only the default name for imports; it need not be unique
+across all source code, and in the rare case of a collision the
+importing package can choose a different name to use locally.
+In any case, confusion is rare because the file name in the import
+determines just which package is being used.
+
+Another convention is that the package name is the base name of
+its source directory;
+the package in src/encoding/base64
+is imported as "encoding/base64"
but has name base64
,
+not encoding_base64
and not encodingBase64
.
+
+The importer of a package will use the name to refer to its contents,
+so exported names in the package can use that fact
+to avoid stutter.
+(Don't use the import .
notation, which can simplify
+tests that must run outside the package they are testing, but should otherwise be avoided.)
+For instance, the buffered reader type in the bufio
package is called Reader
,
+not BufReader
, because users see it as bufio.Reader
,
+which is a clear, concise name.
+Moreover,
+because imported entities are always addressed with their package name, bufio.Reader
+does not conflict with io.Reader
.
+Similarly, the function to make new instances of ring.Ring
—which
+is the definition of a constructor in Go—would
+normally be called NewRing
, but since
+Ring
is the only type exported by the package, and since the
+package is called ring
, it's called just New
,
+which clients of the package see as ring.New
.
+Use the package structure to help you choose good names.
+
+Another short example is once.Do
;
+once.Do(setup)
reads well and would not be improved by
+writing once.DoOrWaitUntilDone(setup)
.
+Long names don't automatically make things more readable.
+A helpful doc comment can often be more valuable than an extra long name.
+
+Go doesn't provide automatic support for getters and setters.
+There's nothing wrong with providing getters and setters yourself,
+and it's often appropriate to do so, but it's neither idiomatic nor necessary
+to put Get
into the getter's name. If you have a field called
+owner
(lower case, unexported), the getter method should be
+called Owner
(upper case, exported), not GetOwner
.
+The use of upper-case names for export provides the hook to discriminate
+the field from the method.
+A setter function, if needed, will likely be called SetOwner
.
+Both names read well in practice:
+
+owner := obj.Owner() +if owner != user { + obj.SetOwner(user) +} ++ +
+By convention, one-method interfaces are named by
+the method name plus an -er suffix or similar modification
+to construct an agent noun: Reader
,
+Writer
, Formatter
,
+CloseNotifier
etc.
+
+There are a number of such names and it's productive to honor them and the function
+names they capture.
+Read
, Write
, Close
, Flush
,
+String
and so on have
+canonical signatures and meanings. To avoid confusion,
+don't give your method one of those names unless it
+has the same signature and meaning.
+Conversely, if your type implements a method with the
+same meaning as a method on a well-known type,
+give it the same name and signature;
+call your string-converter method String
not ToString
.
+
+Finally, the convention in Go is to use MixedCaps
+or mixedCaps
rather than underscores to write
+multiword names.
+
+Like C, Go's formal grammar uses semicolons to terminate statements, +but unlike in C, those semicolons do not appear in the source. +Instead the lexer uses a simple rule to insert semicolons automatically +as it scans, so the input text is mostly free of them. +
+ +
+The rule is this. If the last token before a newline is an identifier
+(which includes words like int
and float64
),
+a basic literal such as a number or string constant, or one of the
+tokens
+
+break continue fallthrough return ++ -- ) } ++
+the lexer always inserts a semicolon after the token. +This could be summarized as, “if the newline comes +after a token that could end a statement, insert a semicolon”. +
+ ++A semicolon can also be omitted immediately before a closing brace, +so a statement such as +
++ go func() { for { dst <- <-src } }() ++
+needs no semicolons.
+Idiomatic Go programs have semicolons only in places such as
+for
loop clauses, to separate the initializer, condition, and
+continuation elements. They are also necessary to separate multiple
+statements on a line, should you write code that way.
+
+One consequence of the semicolon insertion rules
+is that you cannot put the opening brace of a
+control structure (if
, for
, switch
,
+or select
) on the next line. If you do, a semicolon
+will be inserted before the brace, which could cause unwanted
+effects. Write them like this
+
+if i < f() { + g() +} ++
+not like this +
++if i < f() // wrong! +{ // wrong! + g() +} ++ + +
+The control structures of Go are related to those of C but differ
+in important ways.
+There is no do
or while
loop, only a
+slightly generalized
+for
;
+switch
is more flexible;
+if
and switch
accept an optional
+initialization statement like that of for
;
+break
and continue
statements
+take an optional label to identify what to break or continue;
+and there are new control structures including a type switch and a
+multiway communications multiplexer, select
.
+The syntax is also slightly different:
+there are no parentheses
+and the bodies must always be brace-delimited.
+
+In Go a simple if
looks like this:
+
+if x > 0 { + return y +} ++ +
+Mandatory braces encourage writing simple if
statements
+on multiple lines. It's good style to do so anyway,
+especially when the body contains a control statement such as a
+return
or break
.
+
+Since if
and switch
accept an initialization
+statement, it's common to see one used to set up a local variable.
+
+if err := file.Chmod(0664); err != nil { + log.Print(err) + return err +} ++ +
+In the Go libraries, you'll find that
+when an if
statement doesn't flow into the next statement—that is,
+the body ends in break
, continue
,
+goto
, or return
—the unnecessary
+else
is omitted.
+
+f, err := os.Open(name) +if err != nil { + return err +} +codeUsing(f) ++ +
+This is an example of a common situation where code must guard against a
+sequence of error conditions. The code reads well if the
+successful flow of control runs down the page, eliminating error cases
+as they arise. Since error cases tend to end in return
+statements, the resulting code needs no else
statements.
+
+f, err := os.Open(name) +if err != nil { + return err +} +d, err := f.Stat() +if err != nil { + f.Close() + return err +} +codeUsing(f, d) ++ + +
+An aside: The last example in the previous section demonstrates a detail of how the
+:=
short declaration form works.
+The declaration that calls os.Open
reads,
+
+f, err := os.Open(name) ++ +
+This statement declares two variables, f
and err
.
+A few lines later, the call to f.Stat
reads,
+
+d, err := f.Stat() ++ +
+which looks as if it declares d
and err
.
+Notice, though, that err
appears in both statements.
+This duplication is legal: err
is declared by the first statement,
+but only re-assigned in the second.
+This means that the call to f.Stat
uses the existing
+err
variable declared above, and just gives it a new value.
+
+In a :=
declaration a variable v
may appear even
+if it has already been declared, provided:
+
v
+(if v
is already declared in an outer scope, the declaration will create a new variable §),v
, and
+This unusual property is pure pragmatism,
+making it easy to use a single err
value, for example,
+in a long if-else
chain.
+You'll see it used often.
+
+§ It's worth noting here that in Go the scope of function parameters and return values +is the same as the function body, even though they appear lexically outside the braces +that enclose the body. +
+ +
+The Go for
loop is similar to—but not the same as—C's.
+It unifies for
+and while
and there is no do-while
.
+There are three forms, only one of which has semicolons.
+
+// Like a C for +for init; condition; post { } + +// Like a C while +for condition { } + +// Like a C for(;;) +for { } ++ +
+Short declarations make it easy to declare the index variable right in the loop. +
++sum := 0 +for i := 0; i < 10; i++ { + sum += i +} ++ +
+If you're looping over an array, slice, string, or map,
+or reading from a channel, a range
clause can
+manage the loop.
+
+for key, value := range oldMap { + newMap[key] = value +} ++ +
+If you only need the first item in the range (the key or index), drop the second: +
++for key := range m { + if key.expired() { + delete(m, key) + } +} ++ +
+If you only need the second item in the range (the value), use the blank identifier, an underscore, to discard the first: +
++sum := 0 +for _, value := range array { + sum += value +} ++ +
+The blank identifier has many uses, as described in a later section. +
+ +
+For strings, the range
does more work for you, breaking out individual
+Unicode code points by parsing the UTF-8.
+Erroneous encodings consume one byte and produce the
+replacement rune U+FFFD.
+(The name (with associated builtin type) rune
is Go terminology for a
+single Unicode code point.
+See the language specification
+for details.)
+The loop
+
+for pos, char := range "日本\x80語" { // \x80 is an illegal UTF-8 encoding + fmt.Printf("character %#U starts at byte position %d\n", char, pos) +} ++
+prints +
++character U+65E5 '日' starts at byte position 0 +character U+672C '本' starts at byte position 3 +character U+FFFD '�' starts at byte position 6 +character U+8A9E '語' starts at byte position 7 ++ +
+Finally, Go has no comma operator and ++
and --
+are statements not expressions.
+Thus if you want to run multiple variables in a for
+you should use parallel assignment (although that precludes ++
and --
).
+
+// Reverse a +for i, j := 0, len(a)-1; i < j; i, j = i+1, j-1 { + a[i], a[j] = a[j], a[i] +} ++ +
+Go's switch
is more general than C's.
+The expressions need not be constants or even integers,
+the cases are evaluated top to bottom until a match is found,
+and if the switch
has no expression it switches on
+true
.
+It's therefore possible—and idiomatic—to write an
+if
-else
-if
-else
+chain as a switch
.
+
+func unhex(c byte) byte { + switch { + case '0' <= c && c <= '9': + return c - '0' + case 'a' <= c && c <= 'f': + return c - 'a' + 10 + case 'A' <= c && c <= 'F': + return c - 'A' + 10 + } + return 0 +} ++ +
+There is no automatic fall through, but cases can be presented +in comma-separated lists. +
++func shouldEscape(c byte) bool { + switch c { + case ' ', '?', '&', '=', '#', '+', '%': + return true + } + return false +} ++ +
+Although they are not nearly as common in Go as some other C-like
+languages, break
statements can be used to terminate
+a switch
early.
+Sometimes, though, it's necessary to break out of a surrounding loop,
+not the switch, and in Go that can be accomplished by putting a label
+on the loop and "breaking" to that label.
+This example shows both uses.
+
+Loop: + for n := 0; n < len(src); n += size { + switch { + case src[n] < sizeOne: + if validateOnly { + break + } + size = 1 + update(src[n]) + + case src[n] < sizeTwo: + if n+1 >= len(src) { + err = errShortInput + break Loop + } + if validateOnly { + break + } + size = 2 + update(src[n] + src[n+1]<<shift) + } + } ++ +
+Of course, the continue
statement also accepts an optional label
+but it applies only to loops.
+
+To close this section, here's a comparison routine for byte slices that uses two
+switch
statements:
+
+// Compare returns an integer comparing the two byte slices, +// lexicographically. +// The result will be 0 if a == b, -1 if a < b, and +1 if a > b +func Compare(a, b []byte) int { + for i := 0; i < len(a) && i < len(b); i++ { + switch { + case a[i] > b[i]: + return 1 + case a[i] < b[i]: + return -1 + } + } + switch { + case len(a) > len(b): + return 1 + case len(a) < len(b): + return -1 + } + return 0 +} ++ +
+A switch can also be used to discover the dynamic type of an interface
+variable. Such a type switch uses the syntax of a type
+assertion with the keyword type
inside the parentheses.
+If the switch declares a variable in the expression, the variable will
+have the corresponding type in each clause.
+It's also idiomatic to reuse the name in such cases, in effect declaring
+a new variable with the same name but a different type in each case.
+
+var t interface{} +t = functionOfSomeType() +switch t := t.(type) { +default: + fmt.Printf("unexpected type %T\n", t) // %T prints whatever type t has +case bool: + fmt.Printf("boolean %t\n", t) // t has type bool +case int: + fmt.Printf("integer %d\n", t) // t has type int +case *bool: + fmt.Printf("pointer to boolean %t\n", *t) // t has type *bool +case *int: + fmt.Printf("pointer to integer %d\n", *t) // t has type *int +} ++ +
+One of Go's unusual features is that functions and methods
+can return multiple values. This form can be used to
+improve on a couple of clumsy idioms in C programs: in-band
+error returns such as -1
for EOF
+and modifying an argument passed by address.
+
+In C, a write error is signaled by a negative count with the
+error code secreted away in a volatile location.
+In Go, Write
+can return a count and an error: “Yes, you wrote some
+bytes but not all of them because you filled the device”.
+The signature of the Write
method on files from
+package os
is:
+
+func (file *File) Write(b []byte) (n int, err error) ++ +
+and as the documentation says, it returns the number of bytes
+written and a non-nil error
when n
+!=
len(b)
.
+This is a common style; see the section on error handling for more examples.
+
+A similar approach obviates the need to pass a pointer to a return +value to simulate a reference parameter. +Here's a simple-minded function to +grab a number from a position in a byte slice, returning the number +and the next position. +
+ ++func nextInt(b []byte, i int) (int, int) { + for ; i < len(b) && !isDigit(b[i]); i++ { + } + x := 0 + for ; i < len(b) && isDigit(b[i]); i++ { + x = x*10 + int(b[i]) - '0' + } + return x, i +} ++ +
+You could use it to scan the numbers in an input slice b
like this:
+
+ for i := 0; i < len(b); { + x, i = nextInt(b, i) + fmt.Println(x) + } ++ +
+The return or result "parameters" of a Go function can be given names and
+used as regular variables, just like the incoming parameters.
+When named, they are initialized to the zero values for their types when
+the function begins; if the function executes a return
statement
+with no arguments, the current values of the result parameters are
+used as the returned values.
+
+The names are not mandatory but they can make code shorter and clearer:
+they're documentation.
+If we name the results of nextInt
it becomes
+obvious which returned int
+is which.
+
+func nextInt(b []byte, pos int) (value, nextPos int) { ++ +
+Because named results are initialized and tied to an unadorned return, they can simplify
+as well as clarify. Here's a version
+of io.ReadFull
that uses them well:
+
+func ReadFull(r Reader, buf []byte) (n int, err error) { + for len(buf) > 0 && err == nil { + var nr int + nr, err = r.Read(buf) + n += nr + buf = buf[nr:] + } + return +} ++ +
+Go's defer
statement schedules a function call (the
+deferred function) to be run immediately before the function
+executing the defer
returns. It's an unusual but
+effective way to deal with situations such as resources that must be
+released regardless of which path a function takes to return. The
+canonical examples are unlocking a mutex or closing a file.
+
+// Contents returns the file's contents as a string. +func Contents(filename string) (string, error) { + f, err := os.Open(filename) + if err != nil { + return "", err + } + defer f.Close() // f.Close will run when we're finished. + + var result []byte + buf := make([]byte, 100) + for { + n, err := f.Read(buf[0:]) + result = append(result, buf[0:n]...) // append is discussed later. + if err != nil { + if err == io.EOF { + break + } + return "", err // f will be closed if we return here. + } + } + return string(result), nil // f will be closed if we return here. +} ++ +
+Deferring a call to a function such as Close
has two advantages. First, it
+guarantees that you will never forget to close the file, a mistake
+that's easy to make if you later edit the function to add a new return
+path. Second, it means that the close sits near the open,
+which is much clearer than placing it at the end of the function.
+
+The arguments to the deferred function (which include the receiver if +the function is a method) are evaluated when the defer +executes, not when the call executes. Besides avoiding worries +about variables changing values as the function executes, this means +that a single deferred call site can defer multiple function +executions. Here's a silly example. +
+ ++for i := 0; i < 5; i++ { + defer fmt.Printf("%d ", i) +} ++ +
+Deferred functions are executed in LIFO order, so this code will cause
+4 3 2 1 0
to be printed when the function returns. A
+more plausible example is a simple way to trace function execution
+through the program. We could write a couple of simple tracing
+routines like this:
+
+func trace(s string) { fmt.Println("entering:", s) } +func untrace(s string) { fmt.Println("leaving:", s) } + +// Use them like this: +func a() { + trace("a") + defer untrace("a") + // do something.... +} ++ +
+We can do better by exploiting the fact that arguments to deferred
+functions are evaluated when the defer
executes. The
+tracing routine can set up the argument to the untracing routine.
+This example:
+
+func trace(s string) string { + fmt.Println("entering:", s) + return s +} + +func un(s string) { + fmt.Println("leaving:", s) +} + +func a() { + defer un(trace("a")) + fmt.Println("in a") +} + +func b() { + defer un(trace("b")) + fmt.Println("in b") + a() +} + +func main() { + b() +} ++ +
+prints +
+ ++entering: b +in b +entering: a +in a +leaving: a +leaving: b ++ +
+For programmers accustomed to block-level resource management from
+other languages, defer
may seem peculiar, but its most
+interesting and powerful applications come precisely from the fact
+that it's not block-based but function-based. In the section on
+panic
and recover
we'll see another
+example of its possibilities.
+
new
+Go has two allocation primitives, the built-in functions
+new
and make
.
+They do different things and apply to different types, which can be confusing,
+but the rules are simple.
+Let's talk about new
first.
+It's a built-in function that allocates memory, but unlike its namesakes
+in some other languages it does not initialize the memory,
+it only zeros it.
+That is,
+new(T)
allocates zeroed storage for a new item of type
+T
and returns its address, a value of type *T
.
+In Go terminology, it returns a pointer to a newly allocated zero value of type
+T
.
+
+Since the memory returned by new
is zeroed, it's helpful to arrange
+when designing your data structures that the
+zero value of each type can be used without further initialization. This means a user of
+the data structure can create one with new
and get right to
+work.
+For example, the documentation for bytes.Buffer
states that
+"the zero value for Buffer
is an empty buffer ready to use."
+Similarly, sync.Mutex
does not
+have an explicit constructor or Init
method.
+Instead, the zero value for a sync.Mutex
+is defined to be an unlocked mutex.
+
+The zero-value-is-useful property works transitively. Consider this type declaration. +
+ ++type SyncedBuffer struct { + lock sync.Mutex + buffer bytes.Buffer +} ++ +
+Values of type SyncedBuffer
are also ready to use immediately upon allocation
+or just declaration. In the next snippet, both p
and v
will work
+correctly without further arrangement.
+
+p := new(SyncedBuffer) // type *SyncedBuffer +var v SyncedBuffer // type SyncedBuffer ++ +
+Sometimes the zero value isn't good enough and an initializing
+constructor is necessary, as in this example derived from
+package os
.
+
+func NewFile(fd int, name string) *File { + if fd < 0 { + return nil + } + f := new(File) + f.fd = fd + f.name = name + f.dirinfo = nil + f.nepipe = 0 + return f +} ++ +
+There's a lot of boiler plate in there. We can simplify it +using a composite literal, which is +an expression that creates a +new instance each time it is evaluated. +
+ ++func NewFile(fd int, name string) *File { + if fd < 0 { + return nil + } + f := File{fd, name, nil, 0} + return &f +} ++ +
+Note that, unlike in C, it's perfectly OK to return the address of a local variable; +the storage associated with the variable survives after the function +returns. +In fact, taking the address of a composite literal +allocates a fresh instance each time it is evaluated, +so we can combine these last two lines. +
+ ++ return &File{fd, name, nil, 0} ++ +
+The fields of a composite literal are laid out in order and must all be present.
+However, by labeling the elements explicitly as field:
value
+pairs, the initializers can appear in any
+order, with the missing ones left as their respective zero values. Thus we could say
+
+ return &File{fd: fd, name: name} ++ +
+As a limiting case, if a composite literal contains no fields at all, it creates
+a zero value for the type. The expressions new(File)
and &File{}
are equivalent.
+
+Composite literals can also be created for arrays, slices, and maps,
+with the field labels being indices or map keys as appropriate.
+In these examples, the initializations work regardless of the values of Enone
,
+Eio
, and Einval
, as long as they are distinct.
+
+a := [...]string {Enone: "no error", Eio: "Eio", Einval: "invalid argument"} +s := []string {Enone: "no error", Eio: "Eio", Einval: "invalid argument"} +m := map[int]string{Enone: "no error", Eio: "Eio", Einval: "invalid argument"} ++ +
make
+Back to allocation.
+The built-in function make(T,
args)
serves
+a purpose different from new(T)
.
+It creates slices, maps, and channels only, and it returns an initialized
+(not zeroed)
+value of type T
(not *T
).
+The reason for the distinction
+is that these three types represent, under the covers, references to data structures that
+must be initialized before use.
+A slice, for example, is a three-item descriptor
+containing a pointer to the data (inside an array), the length, and the
+capacity, and until those items are initialized, the slice is nil
.
+For slices, maps, and channels,
+make
initializes the internal data structure and prepares
+the value for use.
+For instance,
+
+make([]int, 10, 100) ++ +
+allocates an array of 100 ints and then creates a slice
+structure with length 10 and a capacity of 100 pointing at the first
+10 elements of the array.
+(When making a slice, the capacity can be omitted; see the section on slices
+for more information.)
+In contrast, new([]int)
returns a pointer to a newly allocated, zeroed slice
+structure, that is, a pointer to a nil
slice value.
+
+These examples illustrate the difference between new
and
+make
.
+
+var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful +var v []int = make([]int, 100) // the slice v now refers to a new array of 100 ints + +// Unnecessarily complex: +var p *[]int = new([]int) +*p = make([]int, 100, 100) + +// Idiomatic: +v := make([]int, 100) ++ +
+Remember that make
applies only to maps, slices and channels
+and does not return a pointer.
+To obtain an explicit pointer allocate with new
or take the address
+of a variable explicitly.
+
+Arrays are useful when planning the detailed layout of memory and sometimes +can help avoid allocation, but primarily +they are a building block for slices, the subject of the next section. +To lay the foundation for that topic, here are a few words about arrays. +
+ ++There are major differences between the ways arrays work in Go and C. +In Go, +
+[10]int
+and [20]int
are distinct.
++The value property can be useful but also expensive; if you want C-like behavior and efficiency, +you can pass a pointer to the array. +
+ ++func Sum(a *[3]float64) (sum float64) { + for _, v := range *a { + sum += v + } + return +} + +array := [...]float64{7.0, 8.5, 9.1} +x := Sum(&array) // Note the explicit address-of operator ++ +
+But even this style isn't idiomatic Go. +Use slices instead. +
+ ++Slices wrap arrays to give a more general, powerful, and convenient +interface to sequences of data. Except for items with explicit +dimension such as transformation matrices, most array programming in +Go is done with slices rather than simple arrays. +
+
+Slices hold references to an underlying array, and if you assign one
+slice to another, both refer to the same array.
+If a function takes a slice argument, changes it makes to
+the elements of the slice will be visible to the caller, analogous to
+passing a pointer to the underlying array. A Read
+function can therefore accept a slice argument rather than a pointer
+and a count; the length within the slice sets an upper
+limit of how much data to read. Here is the signature of the
+Read
method of the File
type in package
+os
:
+
+func (f *File) Read(buf []byte) (n int, err error) ++
+The method returns the number of bytes read and an error value, if
+any.
+To read into the first 32 bytes of a larger buffer
+buf
, slice (here used as a verb) the buffer.
+
+ n, err := f.Read(buf[0:32]) ++
+Such slicing is common and efficient. In fact, leaving efficiency aside for +the moment, the following snippet would also read the first 32 bytes of the buffer. +
++ var n int + var err error + for i := 0; i < 32; i++ { + nbytes, e := f.Read(buf[i:i+1]) // Read one byte. + n += nbytes + if nbytes == 0 || e != nil { + err = e + break + } + } ++
+The length of a slice may be changed as long as it still fits within
+the limits of the underlying array; just assign it to a slice of
+itself. The capacity of a slice, accessible by the built-in
+function cap
, reports the maximum length the slice may
+assume. Here is a function to append data to a slice. If the data
+exceeds the capacity, the slice is reallocated. The
+resulting slice is returned. The function uses the fact that
+len
and cap
are legal when applied to the
+nil
slice, and return 0.
+
+func Append(slice, data []byte) []byte { + l := len(slice) + if l + len(data) > cap(slice) { // reallocate + // Allocate double what's needed, for future growth. + newSlice := make([]byte, (l+len(data))*2) + // The copy function is predeclared and works for any slice type. + copy(newSlice, slice) + slice = newSlice + } + slice = slice[0:l+len(data)] + copy(slice[l:], data) + return slice +} ++
+We must return the slice afterwards because, although Append
+can modify the elements of slice
, the slice itself (the run-time data
+structure holding the pointer, length, and capacity) is passed by value.
+
+The idea of appending to a slice is so useful it's captured by the
+append
built-in function. To understand that function's
+design, though, we need a little more information, so we'll return
+to it later.
+
+Go's arrays and slices are one-dimensional. +To create the equivalent of a 2D array or slice, it is necessary to define an array-of-arrays +or slice-of-slices, like this: +
+ ++type Transform [3][3]float64 // A 3x3 array, really an array of arrays. +type LinesOfText [][]byte // A slice of byte slices. ++ +
+Because slices are variable-length, it is possible to have each inner
+slice be a different length.
+That can be a common situation, as in our LinesOfText
+example: each line has an independent length.
+
+text := LinesOfText{ + []byte("Now is the time"), + []byte("for all good gophers"), + []byte("to bring some fun to the party."), +} ++ +
+Sometimes it's necessary to allocate a 2D slice, a situation that can arise when +processing scan lines of pixels, for instance. +There are two ways to achieve this. +One is to allocate each slice independently; the other +is to allocate a single array and point the individual slices into it. +Which to use depends on your application. +If the slices might grow or shrink, they should be allocated independently +to avoid overwriting the next line; if not, it can be more efficient to construct +the object with a single allocation. +For reference, here are sketches of the two methods. +First, a line at a time: +
+ ++// Allocate the top-level slice. +picture := make([][]uint8, YSize) // One row per unit of y. +// Loop over the rows, allocating the slice for each row. +for i := range picture { + picture[i] = make([]uint8, XSize) +} ++ +
+And now as one allocation, sliced into lines: +
+ ++// Allocate the top-level slice, the same as before. +picture := make([][]uint8, YSize) // One row per unit of y. +// Allocate one large slice to hold all the pixels. +pixels := make([]uint8, XSize*YSize) // Has type []uint8 even though picture is [][]uint8. +// Loop over the rows, slicing each row from the front of the remaining pixels slice. +for i := range picture { + picture[i], pixels = pixels[:XSize], pixels[XSize:] +} ++ +
+Maps are a convenient and powerful built-in data structure that associate +values of one type (the key) with values of another type +(the element or value). +The key can be of any type for which the equality operator is defined, +such as integers, +floating point and complex numbers, +strings, pointers, interfaces (as long as the dynamic type +supports equality), structs and arrays. +Slices cannot be used as map keys, +because equality is not defined on them. +Like slices, maps hold references to an underlying data structure. +If you pass a map to a function +that changes the contents of the map, the changes will be visible +in the caller. +
++Maps can be constructed using the usual composite literal syntax +with colon-separated key-value pairs, +so it's easy to build them during initialization. +
++var timeZone = map[string]int{ + "UTC": 0*60*60, + "EST": -5*60*60, + "CST": -6*60*60, + "MST": -7*60*60, + "PST": -8*60*60, +} ++
+Assigning and fetching map values looks syntactically just like +doing the same for arrays and slices except that the index doesn't +need to be an integer. +
++offset := timeZone["EST"] ++
+An attempt to fetch a map value with a key that
+is not present in the map will return the zero value for the type
+of the entries
+in the map. For instance, if the map contains integers, looking
+up a non-existent key will return 0
.
+A set can be implemented as a map with value type bool
.
+Set the map entry to true
to put the value in the set, and then
+test it by simple indexing.
+
+attended := map[string]bool{ + "Ann": true, + "Joe": true, + ... +} + +if attended[person] { // will be false if person is not in the map + fmt.Println(person, "was at the meeting") +} ++
+Sometimes you need to distinguish a missing entry from
+a zero value. Is there an entry for "UTC"
+or is that 0 because it's not in the map at all?
+You can discriminate with a form of multiple assignment.
+
+var seconds int +var ok bool +seconds, ok = timeZone[tz] ++
+For obvious reasons this is called the “comma ok” idiom.
+In this example, if tz
is present, seconds
+will be set appropriately and ok
will be true; if not,
+seconds
will be set to zero and ok
will
+be false.
+Here's a function that puts it together with a nice error report:
+
+func offset(tz string) int { + if seconds, ok := timeZone[tz]; ok { + return seconds + } + log.Println("unknown time zone:", tz) + return 0 +} ++
+To test for presence in the map without worrying about the actual value,
+you can use the blank identifier (_
)
+in place of the usual variable for the value.
+
+_, present := timeZone[tz] ++
+To delete a map entry, use the delete
+built-in function, whose arguments are the map and the key to be deleted.
+It's safe to do this even if the key is already absent
+from the map.
+
+delete(timeZone, "PDT") // Now on Standard Time ++ +
+Formatted printing in Go uses a style similar to C's printf
+family but is richer and more general. The functions live in the fmt
+package and have capitalized names: fmt.Printf
, fmt.Fprintf
,
+fmt.Sprintf
and so on. The string functions (Sprintf
etc.)
+return a string rather than filling in a provided buffer.
+
+You don't need to provide a format string. For each of Printf
,
+Fprintf
and Sprintf
there is another pair
+of functions, for instance Print
and Println
.
+These functions do not take a format string but instead generate a default
+format for each argument. The Println
versions also insert a blank
+between arguments and append a newline to the output while
+the Print
versions add blanks only if the operand on neither side is a string.
+In this example each line produces the same output.
+
+fmt.Printf("Hello %d\n", 23) +fmt.Fprint(os.Stdout, "Hello ", 23, "\n") +fmt.Println("Hello", 23) +fmt.Println(fmt.Sprint("Hello ", 23)) ++
+The formatted print functions fmt.Fprint
+and friends take as a first argument any object
+that implements the io.Writer
interface; the variables os.Stdout
+and os.Stderr
are familiar instances.
+
+Here things start to diverge from C. First, the numeric formats such as %d
+do not take flags for signedness or size; instead, the printing routines use the
+type of the argument to decide these properties.
+
+var x uint64 = 1<<64 - 1 +fmt.Printf("%d %x; %d %x\n", x, x, int64(x), int64(x)) ++
+prints +
++18446744073709551615 ffffffffffffffff; -1 -1 ++
+If you just want the default conversion, such as decimal for integers, you can use
+the catchall format %v
(for “value”); the result is exactly
+what Print
and Println
would produce.
+Moreover, that format can print any value, even arrays, slices, structs, and
+maps. Here is a print statement for the time zone map defined in the previous section.
+
+fmt.Printf("%v\n", timeZone) // or just fmt.Println(timeZone) ++
+which gives output: +
++map[CST:-21600 EST:-18000 MST:-25200 PST:-28800 UTC:0] ++
+For maps, Printf
and friends sort the output lexicographically by key.
+
+When printing a struct, the modified format %+v
annotates the
+fields of the structure with their names, and for any value the alternate
+format %#v
prints the value in full Go syntax.
+
+type T struct { + a int + b float64 + c string +} +t := &T{ 7, -2.35, "abc\tdef" } +fmt.Printf("%v\n", t) +fmt.Printf("%+v\n", t) +fmt.Printf("%#v\n", t) +fmt.Printf("%#v\n", timeZone) ++
+prints +
++&{7 -2.35 abc def} +&{a:7 b:-2.35 c:abc def} +&main.T{a:7, b:-2.35, c:"abc\tdef"} +map[string]int{"CST":-21600, "EST":-18000, "MST":-25200, "PST":-28800, "UTC":0} ++
+(Note the ampersands.)
+That quoted string format is also available through %q
when
+applied to a value of type string
or []byte
.
+The alternate format %#q
will use backquotes instead if possible.
+(The %q
format also applies to integers and runes, producing a
+single-quoted rune constant.)
+Also, %x
works on strings, byte arrays and byte slices as well as
+on integers, generating a long hexadecimal string, and with
+a space in the format (% x
) it puts spaces between the bytes.
+
+Another handy format is %T
, which prints the type of a value.
+
+fmt.Printf("%T\n", timeZone) ++
+prints +
++map[string]int ++
+If you want to control the default format for a custom type, all that's required is to define
+a method with the signature String() string
on the type.
+For our simple type T
, that might look like this.
+
+func (t *T) String() string { + return fmt.Sprintf("%d/%g/%q", t.a, t.b, t.c) +} +fmt.Printf("%v\n", t) ++
+to print in the format +
++7/-2.35/"abc\tdef" ++
+(If you need to print values of type T
as well as pointers to T
,
+the receiver for String
must be of value type; this example used a pointer because
+that's more efficient and idiomatic for struct types.
+See the section below on pointers vs. value receivers for more information.)
+
+Our String
method is able to call Sprintf
because the
+print routines are fully reentrant and can be wrapped this way.
+There is one important detail to understand about this approach,
+however: don't construct a String
method by calling
+Sprintf
in a way that will recur into your String
+method indefinitely. This can happen if the Sprintf
+call attempts to print the receiver directly as a string, which in
+turn will invoke the method again. It's a common and easy mistake
+to make, as this example shows.
+
+type MyString string + +func (m MyString) String() string { + return fmt.Sprintf("MyString=%s", m) // Error: will recur forever. +} ++ +
+It's also easy to fix: convert the argument to the basic string type, which does not have the +method. +
+ ++type MyString string +func (m MyString) String() string { + return fmt.Sprintf("MyString=%s", string(m)) // OK: note conversion. +} ++ +
+In the initialization section we'll see another technique that avoids this recursion. +
+ +
+Another printing technique is to pass a print routine's arguments directly to another such routine.
+The signature of Printf
uses the type ...interface{}
+for its final argument to specify that an arbitrary number of parameters (of arbitrary type)
+can appear after the format.
+
+func Printf(format string, v ...interface{}) (n int, err error) { ++
+Within the function Printf
, v
acts like a variable of type
+[]interface{}
but if it is passed to another variadic function, it acts like
+a regular list of arguments.
+Here is the implementation of the
+function log.Println
we used above. It passes its arguments directly to
+fmt.Sprintln
for the actual formatting.
+
+// Println prints to the standard logger in the manner of fmt.Println. +func Println(v ...interface{}) { + std.Output(2, fmt.Sprintln(v...)) // Output takes parameters (int, string) +} ++
+We write ...
after v
in the nested call to Sprintln
to tell the
+compiler to treat v
as a list of arguments; otherwise it would just pass
+v
as a single slice argument.
+
+There's even more to printing than we've covered here. See the godoc
documentation
+for package fmt
for the details.
+
+By the way, a ...
parameter can be of a specific type, for instance ...int
+for a min function that chooses the least of a list of integers:
+
+func Min(a ...int) int { + min := int(^uint(0) >> 1) // largest int + for _, i := range a { + if i < min { + min = i + } + } + return min +} ++ +
+Now we have the missing piece we needed to explain the design of
+the append
built-in function. The signature of append
+is different from our custom Append
function above.
+Schematically, it's like this:
+
+func append(slice []T, elements ...T) []T ++
+where T is a placeholder for any given type. You can't
+actually write a function in Go where the type T
+is determined by the caller.
+That's why append
is built in: it needs support from the
+compiler.
+
+What append
does is append the elements to the end of
+the slice and return the result. The result needs to be returned
+because, as with our hand-written Append
, the underlying
+array may change. This simple example
+
+x := []int{1,2,3} +x = append(x, 4, 5, 6) +fmt.Println(x) ++
+prints [1 2 3 4 5 6]
. So append
works a
+little like Printf
, collecting an arbitrary number of
+arguments.
+
+But what if we wanted to do what our Append
does and
+append a slice to a slice? Easy: use ...
at the call
+site, just as we did in the call to Output
above. This
+snippet produces identical output to the one above.
+
+x := []int{1,2,3} +y := []int{4,5,6} +x = append(x, y...) +fmt.Println(x) ++
+Without that ...
, it wouldn't compile because the types
+would be wrong; y
is not of type int
.
+
+Although it doesn't look superficially very different from +initialization in C or C++, initialization in Go is more powerful. +Complex structures can be built during initialization and the ordering +issues among initialized objects, even among different packages, are handled +correctly. +
+ +
+Constants in Go are just that—constant.
+They are created at compile time, even when defined as
+locals in functions,
+and can only be numbers, characters (runes), strings or booleans.
+Because of the compile-time restriction, the expressions
+that define them must be constant expressions,
+evaluatable by the compiler. For instance,
+1<<3
is a constant expression, while
+math.Sin(math.Pi/4)
is not because
+the function call to math.Sin
needs
+to happen at run time.
+
+In Go, enumerated constants are created using the iota
+enumerator. Since iota
can be part of an expression and
+expressions can be implicitly repeated, it is easy to build intricate
+sets of values.
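+For example, the ByteSize constants referred to below could be declared like
+this (a sketch of the usual example; it assumes fmt is imported for the
+String method):
+
+type ByteSize float64
+
+const (
+	_           = iota // ignore first value by assigning to blank identifier
+	KB ByteSize = 1 << (10 * iota)
+	MB
+	GB
+	TB
+	PB
+	EB
+	ZB
+	YB
+)
+
+func (b ByteSize) String() string {
+	switch {
+	case b >= YB:
+		return fmt.Sprintf("%.2fYB", b/YB)
+	case b >= ZB:
+		return fmt.Sprintf("%.2fZB", b/ZB)
+	case b >= EB:
+		return fmt.Sprintf("%.2fEB", b/EB)
+	case b >= PB:
+		return fmt.Sprintf("%.2fPB", b/PB)
+	case b >= TB:
+		return fmt.Sprintf("%.2fTB", b/TB)
+	case b >= GB:
+		return fmt.Sprintf("%.2fGB", b/GB)
+	case b >= MB:
+		return fmt.Sprintf("%.2fMB", b/MB)
+	case b >= KB:
+		return fmt.Sprintf("%.2fKB", b/KB)
+	}
+	return fmt.Sprintf("%.2fB", b)
+}
+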
+
+The ability to attach a method such as String
to any
+user-defined type makes it possible for arbitrary values to format themselves
+automatically for printing.
+Although you'll see it most often applied to structs, this technique is also useful for
+scalar types such as floating-point types like ByteSize
.
+
+The expression YB
prints as 1.00YB
,
+while ByteSize(1e13)
prints as 9.09TB
.
+
+The use here of Sprintf
+to implement ByteSize
's String
method is safe
+(avoids recurring indefinitely) not because of a conversion but
+because it calls Sprintf
with %f
,
+which is not a string format: Sprintf
will only call
+the String
method when it wants a string, and %f
+wants a floating-point value.
+
+Variables can be initialized just like constants but the +initializer can be a general expression computed at run time. +
++var ( + home = os.Getenv("HOME") + user = os.Getenv("USER") + gopath = os.Getenv("GOPATH") +) ++ +
+Finally, each source file can define its own niladic init
function to
+set up whatever state is required. (Actually each file can have multiple
+init
functions.)
+And finally means finally: init
is called after all the
+variable declarations in the package have evaluated their initializers,
+and those are evaluated only after all the imported packages have been
+initialized.
+
+Besides initializations that cannot be expressed as declarations,
+a common use of init
functions is to verify or repair
+correctness of the program state before real execution begins.
+
+func init() { + if user == "" { + log.Fatal("$USER not set") + } + if home == "" { + home = "/home/" + user + } + if gopath == "" { + gopath = home + "/go" + } + // gopath may be overridden by --gopath flag on command line. + flag.StringVar(&gopath, "gopath", gopath, "override default GOPATH") +} ++ +
+As we saw with ByteSize
,
+methods can be defined for any named type (except a pointer or an interface);
+the receiver does not have to be a struct.
+
+In the discussion of slices above, we wrote an Append
+function. We can define it as a method on slices instead. To do
+this, we first declare a named type to which we can bind the method, and
+then make the receiver for the method a value of that type.
+
+type ByteSlice []byte + +func (slice ByteSlice) Append(data []byte) []byte { + // Body exactly the same as the Append function defined above. +} ++
+This still requires the method to return the updated slice. We can
+eliminate that clumsiness by redefining the method to take a
+pointer to a ByteSlice
as its receiver, so the
+method can overwrite the caller's slice.
+
+func (p *ByteSlice) Append(data []byte) { + slice := *p + // Body as above, without the return. + *p = slice +} ++
+In fact, we can do even better. If we modify our function so it looks
+like a standard Write
method, like this,
+
+func (p *ByteSlice) Write(data []byte) (n int, err error) { + slice := *p + // Again as above. + *p = slice + return len(data), nil +} ++
+then the type *ByteSlice
satisfies the standard interface
+io.Writer
, which is handy. For instance, we can
+print into one.
+
+ var b ByteSlice + fmt.Fprintf(&b, "This hour has %d days\n", 7) ++
+We pass the address of a ByteSlice
+because only *ByteSlice
satisfies io.Writer
.
+The rule about pointers vs. values for receivers is that value methods
+can be invoked on pointers and values, but pointer methods can only be
+invoked on pointers.
+
+This rule arises because pointer methods can modify the receiver; invoking
+them on a value would cause the method to receive a copy of the value, so
+any modifications would be discarded.
+The language therefore disallows this mistake.
+There is a handy exception, though. When the value is addressable, the
+language takes care of the common case of invoking a pointer method on a
+value by inserting the address operator automatically.
+In our example, the variable b
is addressable, so we can call
+its Write
method with just b.Write
. The compiler
+will rewrite that to (&b).Write
for us.
+
+By the way, the idea of using Write
on a slice of bytes
+is central to the implementation of bytes.Buffer
.
+
+Interfaces in Go provide a way to specify the behavior of an
+object: if something can do this, then it can be used
+here. We've seen a couple of simple examples already;
+custom printers can be implemented by a String
method
+while Fprintf
can generate output to anything
+with a Write
method.
+Interfaces with only one or two methods are common in Go code, and are
+usually given a name derived from the method, such as io.Writer
+for something that implements Write
.
+
+A type can implement multiple interfaces.
+For instance, a collection can be sorted
+by the routines in package sort
if it implements
+sort.Interface
, which contains Len()
,
+Less(i, j int) bool
, and Swap(i, j int)
,
+and it could also have a custom formatter.
+In this contrived example Sequence
satisfies both.
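+The contrived example could look something like this (a sketch; it assumes
+the fmt and sort packages are imported, and the Copy method is used by the
+String methods shown below):
+
+type Sequence []int
+
+// Methods required by sort.Interface.
+func (s Sequence) Len() int           { return len(s) }
+func (s Sequence) Less(i, j int) bool { return s[i] < s[j] }
+func (s Sequence) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
+
+// Copy returns a copy of the Sequence.
+func (s Sequence) Copy() Sequence {
+	copy := make(Sequence, 0, len(s))
+	return append(copy, s...)
+}
+
+// String sorts a copy of the elements before printing them one by one.
+func (s Sequence) String() string {
+	s = s.Copy() // Make a copy; don't overwrite argument.
+	sort.Sort(s)
+	str := "["
+	for i, elem := range s {
+		if i > 0 {
+			str += " "
+		}
+		str += fmt.Sprint(elem)
+	}
+	return str + "]"
+}
+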
+
+The String
method of Sequence
is recreating the
+work that Sprint
already does for slices.
+(It also has complexity O(N²), which is poor.) We can share the
+effort (and also speed it up) if we convert the Sequence
to a plain
+[]int
before calling Sprint
.
+
+func (s Sequence) String() string { + s = s.Copy() + sort.Sort(s) + return fmt.Sprint([]int(s)) +} ++
+This method is another example of the conversion technique for calling
+Sprintf
safely from a String
method.
+Because the two types (Sequence
and []int
)
+are the same if we ignore the type name, it's legal to convert between them.
+The conversion doesn't create a new value, it just temporarily acts
+as though the existing value has a new type.
+(There are other legal conversions, such as from integer to floating point, that
+do create a new value.)
+
+It's an idiom in Go programs to convert the
+type of an expression to access a different
+set of methods. As an example, we could use the existing
+type sort.IntSlice
to reduce the entire example
+to this:
+
+type Sequence []int + +// Method for printing - sorts the elements before printing +func (s Sequence) String() string { + s = s.Copy() + sort.IntSlice(s).Sort() + return fmt.Sprint([]int(s)) +} ++
+Now, instead of having Sequence
implement multiple
+interfaces (sorting and printing), we're using the ability of a data item to be
+converted to multiple types (Sequence
, sort.IntSlice
+and []int
), each of which does some part of the job.
+That's more unusual in practice but can be effective.
+
+Type switches are a form of conversion: they take an interface and, for
+each case in the switch, in a sense convert it to the type of that case.
+Here's a simplified version of how the code under fmt.Printf
turns a value into
+a string using a type switch.
+If it's already a string, we want the actual string value held by the interface, while if it has a
+String
method we want the result of calling the method.
+
+type Stringer interface { + String() string +} + +var value interface{} // Value provided by caller. +switch str := value.(type) { +case string: + return str +case Stringer: + return str.String() +} ++ +
+The first case finds a concrete value; the second converts the interface into another interface. +It's perfectly fine to mix types this way. +
+ +
+What if there's only one type we care about? If we know the value holds a string
+and we just want to extract it?
+A one-case type switch would do, but so would a type assertion.
+A type assertion takes an interface value and extracts from it a value of the specified explicit type.
+The syntax borrows from the clause opening a type switch, but with an explicit
+type rather than the type
keyword:
+
+value.(typeName) ++ +
+and the result is a new value with the static type typeName
.
+That type must either be the concrete type held by the interface, or a second interface
+type that the value can be converted to.
+To extract the string we know is in the value, we could write:
+
+str := value.(string) ++ +
+But if it turns out that the value does not contain a string, the program will crash with a run-time error. +To guard against that, use the "comma, ok" idiom to test, safely, whether the value is a string: +
+ ++str, ok := value.(string) +if ok { + fmt.Printf("string value is: %q\n", str) +} else { + fmt.Printf("value is not a string\n") +} ++ +
+If the type assertion fails, str
will still exist and be of type string, but it will have
+the zero value, an empty string.
+
+As an illustration of the capability, here's an if
-else
+statement that's equivalent to the type switch that opened this section.
+
+if str, ok := value.(string); ok { + return str +} else if str, ok := value.(Stringer); ok { + return str.String() +} ++ +
+If a type exists only to implement an interface and will +never have exported methods beyond that interface, there is +no need to export the type itself. +Exporting just the interface makes it clear the value has no +interesting behavior beyond what is described in the +interface. +It also avoids the need to repeat the documentation +on every instance of a common method. +
+
+In such cases, the constructor should return an interface value
+rather than the implementing type.
+As an example, in the hash libraries
+both crc32.NewIEEE
and adler32.New
+return the interface type hash.Hash32
.
+Substituting the CRC-32 algorithm for Adler-32 in a Go program
+requires only changing the constructor call;
+the rest of the code is unaffected by the change of algorithm.
+
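+A minimal sketch of the pattern, using made-up counter types rather
+than the hash packages:
+
+// counter is unexported; only the interface leaves the package.
+type counter struct{ n int }
+
+func (c *counter) Increment() { c.n++ }
+func (c *counter) Count() int { return c.n }
+
+// Counter is the exported behavior.
+type Counter interface {
+    Increment()
+    Count() int
+}
+
+// NewCounter returns the interface type, not *counter.
+func NewCounter() Counter { return &counter{} }
+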
+A similar approach allows the streaming cipher algorithms
+in the various crypto
packages to be
+separated from the block ciphers they chain together.
+The Block
interface
+in the crypto/cipher
package specifies the
+behavior of a block cipher, which provides encryption
+of a single block of data.
+Then, by analogy with the bufio
package,
+cipher packages that implement this interface
+can be used to construct streaming ciphers, represented
+by the Stream
interface, without
+knowing the details of the block encryption.
+
+The crypto/cipher
interfaces look like this:
+
+type Block interface { + BlockSize() int + Encrypt(dst, src []byte) + Decrypt(dst, src []byte) +} + +type Stream interface { + XORKeyStream(dst, src []byte) +} ++ +
+Here's the definition of the counter mode (CTR) stream, +which turns a block cipher into a streaming cipher; notice +that the block cipher's details are abstracted away: +
+ ++// NewCTR returns a Stream that encrypts/decrypts using the given Block in +// counter mode. The length of iv must be the same as the Block's block size. +func NewCTR(block Block, iv []byte) Stream ++
+NewCTR
applies not
+just to one specific encryption algorithm and data source but to any
+implementation of the Block
interface and any
+Stream
. Because they return
+interface values, replacing CTR
+encryption with other encryption modes is a localized change. The constructor
+calls must be edited, but because the surrounding code must treat the result only
+as a Stream
, it won't notice the difference.
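+For instance, a sketch of how a caller might wire this up (key, iv,
+plaintext, and ciphertext are assumed to be byte slices of appropriate
+length; error handling is abbreviated):
+
+block, err := aes.NewCipher(key) // any Block implementation would do
+if err != nil {
+    log.Fatal(err)
+}
+stream := cipher.NewCTR(block, iv)
+stream.XORKeyStream(ciphertext, plaintext)
+// Switching to, say, OFB mode changes only the constructor:
+// stream := cipher.NewOFB(block, iv)
+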
+
+Since almost anything can have methods attached, almost anything can
+satisfy an interface. One illustrative example is in the http
+package, which defines the Handler
interface. Any object
+that implements Handler
can serve HTTP requests.
+
+type Handler interface { + ServeHTTP(ResponseWriter, *Request) +} ++
+ResponseWriter
is itself an interface that provides access
+to the methods needed to return the response to the client.
+Those methods include the standard Write
method, so an
+http.ResponseWriter
can be used wherever an io.Writer
+can be used.
+Request
is a struct containing a parsed representation
+of the request from the client.
+
+For brevity, let's ignore POSTs and assume HTTP requests are always +GETs; that simplification does not affect the way the handlers are set up. +Here's a trivial implementation of a handler to count the number of times +the page is visited. +
++// Simple counter server. +type Counter struct { + n int +} + +func (ctr *Counter) ServeHTTP(w http.ResponseWriter, req *http.Request) { + ctr.n++ + fmt.Fprintf(w, "counter = %d\n", ctr.n) +} ++
+(Keeping with our theme, note how Fprintf
can print to an
+http.ResponseWriter
.)
+In a real server, access to ctr.n
would need protection from
+concurrent access.
+See the sync
and atomic
packages for suggestions.
+
+For reference, here's how to attach such a server to a node on the URL tree. +
++import "net/http" +... +ctr := new(Counter) +http.Handle("/counter", ctr) ++
+But why make Counter
a struct? An integer is all that's needed.
+(The receiver needs to be a pointer so the increment is visible to the caller.)
+
+// Simpler counter server. +type Counter int + +func (ctr *Counter) ServeHTTP(w http.ResponseWriter, req *http.Request) { + *ctr++ + fmt.Fprintf(w, "counter = %d\n", *ctr) +} ++
+What if your program has some internal state that needs to be notified that a page +has been visited? Tie a channel to the web page. +
++// A channel that sends a notification on each visit. +// (Probably want the channel to be buffered.) +type Chan chan *http.Request + +func (ch Chan) ServeHTTP(w http.ResponseWriter, req *http.Request) { + ch <- req + fmt.Fprint(w, "notification sent") +} ++
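+A sketch of how such a handler might be registered and consumed
+(the path and buffer size are arbitrary):
+
+notifications := make(Chan, 10)
+http.Handle("/visited", notifications)
+go func() {
+    for req := range notifications {
+        log.Println("page visited:", req.URL.Path)
+    }
+}()
+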
+Finally, let's say we wanted to present on /args
the arguments
+used when invoking the server binary.
+It's easy to write a function to print the arguments.
+
+func ArgServer() { + fmt.Println(os.Args) +} ++
+How do we turn that into an HTTP server? We could make ArgServer
+a method of some type whose value we ignore, but there's a cleaner way.
+Since we can define a method for any type except pointers and interfaces,
+we can write a method for a function.
+The http
package contains this code:
+
+// The HandlerFunc type is an adapter to allow the use of +// ordinary functions as HTTP handlers. If f is a function +// with the appropriate signature, HandlerFunc(f) is a +// Handler object that calls f. +type HandlerFunc func(ResponseWriter, *Request) + +// ServeHTTP calls f(w, req). +func (f HandlerFunc) ServeHTTP(w ResponseWriter, req *Request) { + f(w, req) +} ++
+HandlerFunc
is a type with a method, ServeHTTP
,
+so values of that type can serve HTTP requests. Look at the implementation
+of the method: the receiver is a function, f
, and the method
+calls f
. That may seem odd but it's not that different from, say,
+the receiver being a channel and the method sending on the channel.
+
+To make ArgServer
into an HTTP server, we first modify it
+to have the right signature.
+
+// Argument server. +func ArgServer(w http.ResponseWriter, req *http.Request) { + fmt.Fprintln(w, os.Args) +} ++
+ArgServer
 now has the same signature as HandlerFunc

,
+so it can be converted to that type to access its methods,
+just as we converted Sequence
to IntSlice
+to access IntSlice.Sort
.
+The code to set it up is concise:
+
+http.Handle("/args", http.HandlerFunc(ArgServer)) ++
+When someone visits the page /args
,
+the handler installed at that page has value ArgServer
+and type HandlerFunc
.
+The HTTP server will invoke the method ServeHTTP
+of that type, with ArgServer
as the receiver, which will in turn call
+ArgServer
(via the invocation f(w, req)
+inside HandlerFunc.ServeHTTP
).
+The arguments will then be displayed.
+
+In this section we have made an HTTP server from a struct, an integer, +a channel, and a function, all because interfaces are just sets of +methods, which can be defined for (almost) any type. +
+ +
+We've mentioned the blank identifier a couple of times now, in the context of
+for
range
loops
+and maps.
+The blank identifier can be assigned or declared with any value of any type, with the
+value discarded harmlessly.
+It's a bit like writing to the Unix /dev/null
file:
+it represents a write-only value
+to be used as a place-holder
+where a variable is needed but the actual value is irrelevant.
+It has uses beyond those we've seen already.
+
+The use of a blank identifier in a for
range
loop is a
+special case of a general situation: multiple assignment.
+
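+For instance, discarding the index while ranging over a slice
+(values is assumed to be a []int):
+
+sum := 0
+for _, v := range values {
+    sum += v
+}
+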
+If an assignment requires multiple values on the left side, +but one of the values will not be used by the program, +a blank identifier on the left-hand-side of +the assignment avoids the need +to create a dummy variable and makes it clear that the +value is to be discarded. +For instance, when calling a function that returns +a value and an error, but only the error is important, +use the blank identifier to discard the irrelevant value. +
+ ++if _, err := os.Stat(path); os.IsNotExist(err) { + fmt.Printf("%s does not exist\n", path) +} ++ +
+Occasionally you'll see code that discards the error value in order +to ignore the error; this is terrible practice. Always check error returns; +they're provided for a reason. +
+ ++// Bad! This code will crash if path does not exist. +fi, _ := os.Stat(path) +if fi.IsDir() { + fmt.Printf("%s is a directory\n", path) +} ++ +
+It is an error to import a package or to declare a variable without using it. +Unused imports bloat the program and slow compilation, +while a variable that is initialized but not used is at least +a wasted computation and perhaps indicative of a +larger bug. +When a program is under active development, however, +unused imports and variables often arise and it can +be annoying to delete them just to have the compilation proceed, +only to have them be needed again later. +The blank identifier provides a workaround. +
+
+This half-written program has two unused imports
+(fmt
and io
)
+and an unused variable (fd
),
+so it will not compile, but it would be nice to see if the
+code so far is correct.
+
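+Such a half-written program might look like this (a sketch; the file
+name is arbitrary):
+
+package main
+
+import (
+    "fmt"
+    "io"
+    "log"
+    "os"
+)
+
+func main() {
+    fd, err := os.Open("test.go")
+    if err != nil {
+        log.Fatal(err)
+    }
+    // TODO: use fd.
+}
+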
+To silence complaints about the unused imports, use a
+blank identifier to refer to a symbol from the imported package.
+Similarly, assigning the unused variable fd
+to the blank identifier will silence the unused variable error.
+This version of the program does compile.
+
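+A sketch of the compiling version, with the temporary blank
+declarations in place:
+
+package main
+
+import (
+    "fmt"
+    "io"
+    "log"
+    "os"
+)
+
+var _ = fmt.Printf // For debugging; delete when done.
+var _ io.Reader    // For debugging; delete when done.
+
+func main() {
+    fd, err := os.Open("test.go")
+    if err != nil {
+        log.Fatal(err)
+    }
+    // TODO: use fd.
+    _ = fd // For debugging; delete when done.
+}
+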
+By convention, the global declarations to silence import errors +should come right after the imports and be commented, +both to make them easy to find and as a reminder to clean things up later. +
+ +
+An unused import like fmt
or io
in the
+previous example should eventually be used or removed:
+blank assignments identify code as a work in progress.
+But sometimes it is useful to import a package only for its
+side effects, without any explicit use.
+For example, during its init
function,
+the net/http/pprof
+package registers HTTP handlers that provide
+debugging information. It has an exported API, but
+most clients need only the handler registration and
+access the data through a web page.
+To import the package only for its side effects, rename the package
+to the blank identifier:
+
+import _ "net/http/pprof" ++
+This form of import makes clear that the package is being +imported for its side effects, because there is no other possible +use of the package: in this file, it doesn't have a name. +(If it did, and we didn't use that name, the compiler would reject the program.) +
+ +
+As we saw in the discussion of interfaces above,
+a type need not declare explicitly that it implements an interface.
+Instead, a type implements the interface just by implementing the interface's methods.
+In practice, most interface conversions are static and therefore checked at compile time.
+For example, passing an *os.File
to a function
+expecting an io.Reader
will not compile unless
+*os.File
implements the io.Reader
interface.
+
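+For example (a sketch; dump and data.txt are made up for illustration):
+
+func dump(r io.Reader) {
+    io.Copy(os.Stdout, r)
+}
+
+f, err := os.Open("data.txt")
+if err != nil {
+    log.Fatal(err)
+}
+dump(f) // compiles only because *os.File satisfies io.Reader
+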
+Some interface checks do happen at run-time, though.
+One instance is in the encoding/json
+package, which defines a Marshaler
+interface. When the JSON encoder receives a value that implements that interface,
+the encoder invokes the value's marshaling method to convert it to JSON
+instead of doing the standard conversion.
+The encoder checks this property at run time with a type assertion like:
+
+m, ok := val.(json.Marshaler) ++ +
+If it's necessary only to ask whether a type implements an interface, without +actually using the interface itself, perhaps as part of an error check, use the blank +identifier to ignore the type-asserted value: +
+ ++if _, ok := val.(json.Marshaler); ok { + fmt.Printf("value %v of type %T implements json.Marshaler\n", val, val) +} ++ +
+One place this situation arises is when it is necessary to guarantee within the package implementing the type that
+it actually satisfies the interface.
+If a type—for example,
+json.RawMessage
—needs
+a custom JSON representation, it should implement
+json.Marshaler
, but there are no static conversions that would
+cause the compiler to verify this automatically.
+If the type inadvertently fails to satisfy the interface, the JSON encoder will still work,
+but will not use the custom implementation.
+To guarantee that the implementation is correct,
+a global declaration using the blank identifier can be used in the package:
+
+var _ json.Marshaler = (*RawMessage)(nil) ++
+In this declaration, the assignment involving a conversion of a
+*RawMessage
to a Marshaler
+requires that *RawMessage
implements Marshaler
,
+and that property will be checked at compile time.
+Should the json.Marshaler
interface change, this package
+will no longer compile and we will be on notice that it needs to be updated.
+
+The appearance of the blank identifier in this construct indicates that +the declaration exists only for the type checking, +not to create a variable. +Don't do this for every type that satisfies an interface, though. +By convention, such declarations are only used +when there are no static conversions already present in the code, +which is a rare event. +
+ + ++Go does not provide the typical, type-driven notion of subclassing, +but it does have the ability to “borrow” pieces of an +implementation by embedding types within a struct or +interface. +
+
+Interface embedding is very simple.
+We've mentioned the io.Reader
and io.Writer
interfaces before;
+here are their definitions.
+
+type Reader interface { + Read(p []byte) (n int, err error) +} + +type Writer interface { + Write(p []byte) (n int, err error) +} ++
+The io
package also exports several other interfaces
+that specify objects that can implement several such methods.
+For instance, there is io.ReadWriter
, an interface
+containing both Read
and Write
.
+We could specify io.ReadWriter
by listing the
+two methods explicitly, but it's easier and more evocative
+to embed the two interfaces to form the new one, like this:
+
+// ReadWriter is the interface that combines the Reader and Writer interfaces. +type ReadWriter interface { + Reader + Writer +} ++
+This says just what it looks like: A ReadWriter
can do
+what a Reader
does and what a Writer
+does; it is a union of the embedded interfaces.
+Only interfaces can be embedded within interfaces.
+
+The same basic idea applies to structs, but with more far-reaching
+implications. The bufio
package has two struct types,
+bufio.Reader
and bufio.Writer
, each of
+which of course implements the analogous interfaces from package
+io
.
+And bufio
also implements a buffered reader/writer,
+which it does by combining a reader and a writer into one struct
+using embedding: it lists the types within the struct
+but does not give them field names.
+
+// ReadWriter stores pointers to a Reader and a Writer. +// It implements io.ReadWriter. +type ReadWriter struct { + *Reader // *bufio.Reader + *Writer // *bufio.Writer +} ++
+The embedded elements are pointers to structs and of course
+must be initialized to point to valid structs before they
+can be used.
+The ReadWriter
struct could be written as
+
+type ReadWriter struct { + reader *Reader + writer *Writer +} ++
+but then to promote the methods of the fields and to
+satisfy the io
interfaces, we would also need
+to provide forwarding methods, like this:
+
+func (rw *ReadWriter) Read(p []byte) (n int, err error) { + return rw.reader.Read(p) +} ++
+By embedding the structs directly, we avoid this bookkeeping.
+The methods of embedded types come along for free, which means that bufio.ReadWriter
+not only has the methods of bufio.Reader
and bufio.Writer
,
+it also satisfies all three interfaces:
+io.Reader
,
+io.Writer
, and
+io.ReadWriter
.
+
+There's an important way in which embedding differs from subclassing. When we embed a type,
+the methods of that type become methods of the outer type,
+but when they are invoked the receiver of the method is the inner type, not the outer one.
+In our example, when the Read
method of a bufio.ReadWriter
is
+invoked, it has exactly the same effect as the forwarding method written out above;
+the receiver is the reader
field of the ReadWriter
, not the
+ReadWriter
itself.
+
+Embedding can also be a simple convenience. +This example shows an embedded field alongside a regular, named field. +
++type Job struct { + Command string + *log.Logger +} ++
+The Job
type now has the Print
, Printf
, Println
+and other
+methods of *log.Logger
. We could have given the Logger
+a field name, of course, but it's not necessary to do so. And now, once
+initialized, we can
+log to the Job
:
+
+job.Println("starting now...") ++
+The Logger
is a regular field of the Job
struct,
+so we can initialize it in the usual way inside the constructor for Job
, like this,
+
+func NewJob(command string, logger *log.Logger) *Job { + return &Job{command, logger} +} ++
+or with a composite literal, +
++job := &Job{command, log.New(os.Stderr, "Job: ", log.Ldate)} ++
+If we need to refer to an embedded field directly, the type name of the field,
+ignoring the package qualifier, serves as a field name, as it did
+in the Read
method of our ReadWriter
struct.
+Here, if we needed to access the
+*log.Logger
of a Job
variable job
,
+we would write job.Logger
,
+which would be useful if we wanted to refine the methods of Logger
.
+
+func (job *Job) Printf(format string, args ...interface{}) { + job.Logger.Printf("%q: %s", job.Command, fmt.Sprintf(format, args...)) +} ++
+Embedding types introduces the problem of name conflicts but the rules to resolve
+them are simple.
+First, a field or method X
hides any other item X
in a more deeply
+nested part of the type.
+If log.Logger
contained a field or method called Command
, the Command
field
+of Job
would dominate it.
+
+Second, if the same name appears at the same nesting level, it is usually an error;
+it would be erroneous to embed log.Logger
if the Job
struct
+contained another field or method called Logger
.
+However, if the duplicate name is never mentioned in the program outside the type definition, it is OK.
+This qualification provides some protection against changes made to types embedded from outside; there
+is no problem if a field is added that conflicts with another field in another subtype if neither field
+is ever used.
+
+Concurrent programming is a large topic and there is space only for some +Go-specific highlights here. +
++Concurrent programming in many environments is made difficult by the +subtleties required to implement correct access to shared variables. Go encourages +a different approach in which shared values are passed around on channels +and, in fact, never actively shared by separate threads of execution. +Only one goroutine has access to the value at any given time. +Data races cannot occur, by design. +To encourage this way of thinking we have reduced it to a slogan: +
++Do not communicate by sharing memory; +instead, share memory by communicating. ++
+This approach can be taken too far. Reference counts may be best done +by putting a mutex around an integer variable, for instance. But as a +high-level approach, using channels to control access makes it easier +to write clear, correct programs. +
++One way to think about this model is to consider a typical single-threaded +program running on one CPU. It has no need for synchronization primitives. +Now run another such instance; it too needs no synchronization. Now let those +two communicate; if the communication is the synchronizer, there's still no need +for other synchronization. Unix pipelines, for example, fit this model +perfectly. Although Go's approach to concurrency originates in Hoare's +Communicating Sequential Processes (CSP), +it can also be seen as a type-safe generalization of Unix pipes. +
+ ++They're called goroutines because the existing +terms—threads, coroutines, processes, and so on—convey +inaccurate connotations. A goroutine has a simple model: it is a +function executing concurrently with other goroutines in the same +address space. It is lightweight, costing little more than the +allocation of stack space. +And the stacks start small, so they are cheap, and grow +by allocating (and freeing) heap storage as required. +
++Goroutines are multiplexed onto multiple OS threads so if one should +block, such as while waiting for I/O, others continue to run. Their +design hides many of the complexities of thread creation and +management. +
+
+Prefix a function or method call with the go
+keyword to run the call in a new goroutine.
+When the call completes, the goroutine
+exits, silently. (The effect is similar to the Unix shell's
+&
notation for running a command in the
+background.)
+
+go list.Sort() // run list.Sort concurrently; don't wait for it. ++
+A function literal can be handy in a goroutine invocation. +
++func Announce(message string, delay time.Duration) { + go func() { + time.Sleep(delay) + fmt.Println(message) + }() // Note the parentheses - must call the function. +} ++
+In Go, function literals are closures: the implementation makes +sure the variables referred to by the function survive as long as they are active. +
++These examples aren't too practical because the functions have no way of signaling +completion. For that, we need channels. +
+ +
+Like maps, channels are allocated with make
, and
+the resulting value acts as a reference to an underlying data structure.
+If an optional integer parameter is provided, it sets the buffer size for the channel.
+The default is zero, for an unbuffered or synchronous channel.
+
+ci := make(chan int) // unbuffered channel of integers +cj := make(chan int, 0) // unbuffered channel of integers +cs := make(chan *os.File, 100) // buffered channel of pointers to Files ++
+Unbuffered channels combine communication—the exchange of a value—with +synchronization—guaranteeing that two calculations (goroutines) are in +a known state. +
++There are lots of nice idioms using channels. Here's one to get us started. +In the previous section we launched a sort in the background. A channel +can allow the launching goroutine to wait for the sort to complete. +
++c := make(chan int) // Allocate a channel. +// Start the sort in a goroutine; when it completes, signal on the channel. +go func() { + list.Sort() + c <- 1 // Send a signal; value does not matter. +}() +doSomethingForAWhile() +<-c // Wait for sort to finish; discard sent value. ++
+Receivers always block until there is data to receive. +If the channel is unbuffered, the sender blocks until the receiver has +received the value. +If the channel has a buffer, the sender blocks only until the +value has been copied to the buffer; if the buffer is full, this +means waiting until some receiver has retrieved a value. +
+
+A buffered channel can be used like a semaphore, for instance to
+limit throughput. In this example, incoming requests are passed
+to handle
, which sends a value into the channel, processes
+the request, and then receives a value from the channel
+to ready the “semaphore” for the next consumer.
+The capacity of the channel buffer limits the number of
+simultaneous calls to process
.
+
+var sem = make(chan int, MaxOutstanding) + +func handle(r *Request) { + sem <- 1 // Wait for active queue to drain. + process(r) // May take a long time. + <-sem // Done; enable next request to run. +} + +func Serve(queue chan *Request) { + for { + req := <-queue + go handle(req) // Don't wait for handle to finish. + } +} ++ +
+Once MaxOutstanding
handlers are executing process
,
+any more will block trying to send into the filled channel buffer,
+until one of the existing handlers finishes and receives from the buffer.
+
+This design has a problem, though: Serve
+creates a new goroutine for
+every incoming request, even though only MaxOutstanding
+of them can run at any moment.
+As a result, the program can consume unlimited resources if the requests come in too fast.
+We can address that deficiency by changing Serve
to
+gate the creation of the goroutines.
+Here's an obvious solution, but beware it has a bug we'll fix subsequently:
+
+func Serve(queue chan *Request) { + for req := range queue { + sem <- 1 + go func() { + process(req) // Buggy; see explanation below. + <-sem + }() + } +}+ +
+The bug is that in a Go for
loop, the loop variable
+is reused for each iteration, so the req
+variable is shared across all goroutines.
+That's not what we want.
+We need to make sure that req
is unique for each goroutine.
+Here's one way to do that, passing the value of req
as an argument
+to the closure in the goroutine:
+
+func Serve(queue chan *Request) { + for req := range queue { + sem <- 1 + go func(req *Request) { + process(req) + <-sem + }(req) + } +}+ +
+Compare this version with the previous to see the difference in how +the closure is declared and run. +Another solution is just to create a new variable with the same +name, as in this example: +
+ ++func Serve(queue chan *Request) { + for req := range queue { + req := req // Create new instance of req for the goroutine. + sem <- 1 + go func() { + process(req) + <-sem + }() + } +}+ +
+It may seem odd to write +
+ ++req := req ++ +
+but it's legal and idiomatic in Go to do this. +You get a fresh version of the variable with the same name, deliberately +shadowing the loop variable locally but unique to each goroutine. +
+ +
+Going back to the general problem of writing the server,
+another approach that manages resources well is to start a fixed
+number of handle
goroutines all reading from the request
+channel.
+The number of goroutines limits the number of simultaneous
+calls to process
.
+This Serve
function also accepts a channel on which
+it will be told to exit; after launching the goroutines it blocks
+receiving from that channel.
+
+func handle(queue chan *Request) { + for r := range queue { + process(r) + } +} + +func Serve(clientRequests chan *Request, quit chan bool) { + // Start handlers + for i := 0; i < MaxOutstanding; i++ { + go handle(clientRequests) + } + <-quit // Wait to be told to exit. +} ++ +
+One of the most important properties of Go is that +a channel is a first-class value that can be allocated and passed +around like any other. A common use of this property is +to implement safe, parallel demultiplexing. +
+
+In the example in the previous section, handle
was
+an idealized handler for a request but we didn't define the
+type it was handling. If that type includes a channel on which
+to reply, each client can provide its own path for the answer.
+Here's a schematic definition of type Request
.
+
+type Request struct { + args []int + f func([]int) int + resultChan chan int +} ++
+The client provides a function and its arguments, as well as +a channel inside the request object on which to receive the answer. +
++func sum(a []int) (s int) { + for _, v := range a { + s += v + } + return +} + +request := &Request{[]int{3, 4, 5}, sum, make(chan int)} +// Send request +clientRequests <- request +// Wait for response. +fmt.Printf("answer: %d\n", <-request.resultChan) ++
+On the server side, the handler function is the only thing that changes. +
++func handle(queue chan *Request) { + for req := range queue { + req.resultChan <- req.f(req.args) + } +} ++
+There's clearly a lot more to do to make it realistic, but this +code is a framework for a rate-limited, parallel, non-blocking RPC +system, and there's not a mutex in sight. +
+ ++Another application of these ideas is to parallelize a calculation +across multiple CPU cores. If the calculation can be broken into +separate pieces that can execute independently, it can be parallelized, +with a channel to signal when each piece completes. +
++Let's say we have an expensive operation to perform on a vector of items, +and that the value of the operation on each item is independent, +as in this idealized example. +
++type Vector []float64 + +// Apply the operation to v[i], v[i+1] ... up to v[n-1]. +func (v Vector) DoSome(i, n int, u Vector, c chan int) { + for ; i < n; i++ { + v[i] += u.Op(v[i]) + } + c <- 1 // signal that this piece is done +} ++
+We launch the pieces independently in a loop, one per CPU. +They can complete in any order but it doesn't matter; we just +count the completion signals by draining the channel after +launching all the goroutines. +
++const numCPU = 4 // number of CPU cores + +func (v Vector) DoAll(u Vector) { + c := make(chan int, numCPU) // Buffering optional but sensible. + for i := 0; i < numCPU; i++ { + go v.DoSome(i*len(v)/numCPU, (i+1)*len(v)/numCPU, u, c) + } + // Drain the channel. + for i := 0; i < numCPU; i++ { + <-c // wait for one task to complete + } + // All done. +} ++
+Rather than create a constant value for numCPU, we can ask the runtime what
+value is appropriate.
+The function runtime.NumCPU
+returns the number of hardware CPU cores in the machine, so we could write
+
+var numCPU = runtime.NumCPU() ++
+There is also a function
+runtime.GOMAXPROCS
,
+which reports (or sets)
+the user-specified number of cores that a Go program can have running
+simultaneously.
+It defaults to the value of runtime.NumCPU
but can be
+overridden by setting the similarly named shell environment variable
+or by calling the function with a positive number. Calling it with
+zero just queries the value.
+Therefore if we want to honor the user's resource request, we should write
+
+var numCPU = runtime.GOMAXPROCS(0) ++
+Be sure not to confuse the ideas of concurrency—structuring a program +as independently executing components—and parallelism—executing +calculations in parallel for efficiency on multiple CPUs. +Although the concurrency features of Go can make some problems easy +to structure as parallel computations, Go is a concurrent language, +not a parallel one, and not all parallelization problems fit Go's model. +For a discussion of the distinction, see the talk cited in +this +blog post. + +
+The tools of concurrent programming can even make non-concurrent
+ideas easier to express. Here's an example abstracted from an RPC
+package. The client goroutine loops receiving data from some source,
+perhaps a network. To avoid allocating and freeing buffers, it keeps
+a free list, and uses a buffered channel to represent it. If the
+channel is empty, a new buffer gets allocated.
+Once the message buffer is ready, it's sent to the server on
+serverChan
.
+
+var freeList = make(chan *Buffer, 100) +var serverChan = make(chan *Buffer) + +func client() { + for { + var b *Buffer + // Grab a buffer if available; allocate if not. + select { + case b = <-freeList: + // Got one; nothing more to do. + default: + // None free, so allocate a new one. + b = new(Buffer) + } + load(b) // Read next message from the net. + serverChan <- b // Send to server. + } +} ++
+The server loop receives each message from the client, processes it, +and returns the buffer to the free list. +
++func server() { + for { + b := <-serverChan // Wait for work. + process(b) + // Reuse buffer if there's room. + select { + case freeList <- b: + // Buffer on free list; nothing more to do. + default: + // Free list full, just carry on. + } + } +} ++
+The client attempts to retrieve a buffer from freeList
;
+if none is available, it allocates a fresh one.
+The server's send to freeList
puts b
back
+on the free list unless the list is full, in which case the
+buffer is dropped on the floor to be reclaimed by
+the garbage collector.
+(The default
clauses in the select
+statements execute when no other case is ready,
+meaning that the selects
never block.)
+This implementation builds a leaky bucket free list
+in just a few lines, relying on the buffered channel and
+the garbage collector for bookkeeping.
+
+Library routines must often return some sort of error indication to
+the caller.
+As mentioned earlier, Go's multivalue return makes it
+easy to return a detailed error description alongside the normal
+return value.
+It is good style to use this feature to provide detailed error information.
+For example, as we'll see, os.Open
doesn't
+just return a nil
pointer on failure, it also returns an
+error value that describes what went wrong.
+
+By convention, errors have type error
,
+a simple built-in interface.
+
+type error interface { + Error() string +} ++
+A library writer is free to implement this interface with a
+richer model under the covers, making it possible not only
+to see the error but also to provide some context.
+As mentioned, alongside the usual *os.File
+return value, os.Open
also returns an
+error value.
+If the file is opened successfully, the error will be nil
,
+but when there is a problem, it will hold an
+os.PathError
:
+
+// PathError records an error and the operation and +// file path that caused it. +type PathError struct { + Op string // "open", "unlink", etc. + Path string // The associated file. + Err error // Returned by the system call. +} + +func (e *PathError) Error() string { + return e.Op + " " + e.Path + ": " + e.Err.Error() +} ++
+PathError
's Error
generates
+a string like this:
+
+open /etc/passwx: no such file or directory ++
+Such an error, which includes the problematic file name, the +operation, and the operating system error it triggered, is useful even +if printed far from the call that caused it; +it is much more informative than the plain +"no such file or directory". +
+ +
+When feasible, error strings should identify their origin, such as by having
+a prefix naming the operation or package that generated the error. For example, in package
+image
, the string representation for a decoding error due to an
+unknown format is "image: unknown format".
+
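+In code, such an error might be defined like this (a sketch; the real
+image package exports a similar variable):
+
+var ErrFormat = errors.New("image: unknown format")
+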
+Callers that care about the precise error details can
+use a type switch or a type assertion to look for specific
+errors and extract details. For PathErrors
+this might include examining the internal Err
+field for recoverable failures.
+
+for try := 0; try < 2; try++ { + file, err = os.Create(filename) + if err == nil { + return + } + if e, ok := err.(*os.PathError); ok && e.Err == syscall.ENOSPC { + deleteTempFiles() // Recover some space. + continue + } + return +} ++ +
+The second if
statement here is another type assertion.
+If it fails, ok
will be false, and e
+will be nil
.
+If it succeeds, ok
will be true, which means the
+error was of type *os.PathError
, and then so is e
,
+which we can examine for more information about the error.
+
+The usual way to report an error to a caller is to return an
+error
as an extra return value. The canonical
+Read
method is a well-known instance; it returns a byte
+count and an error
. But what if the error is
+unrecoverable? Sometimes the program simply cannot continue.
+
+For this purpose, there is a built-in function panic
+that in effect creates a run-time error that will stop the program
+(but see the next section). The function takes a single argument
+of arbitrary type—often a string—to be printed as the
+program dies. It's also a way to indicate that something impossible has
+happened, such as exiting an infinite loop.
+
+// A toy implementation of cube root using Newton's method. +func CubeRoot(x float64) float64 { + z := x/3 // Arbitrary initial value + for i := 0; i < 1e6; i++ { + prevz := z + z -= (z*z*z-x) / (3*z*z) + if veryClose(z, prevz) { + return z + } + } + // A million iterations has not converged; something is wrong. + panic(fmt.Sprintf("CubeRoot(%g) did not converge", x)) +} ++ +
+This is only an example but real library functions should
+avoid panic
. If the problem can be masked or worked
+around, it's always better to let things continue to run rather
+than taking down the whole program. One possible counterexample
+is during initialization: if the library truly cannot set itself up,
+it might be reasonable to panic, so to speak.
+
+var user = os.Getenv("USER") + +func init() { + if user == "" { + panic("no value for $USER") + } +} ++ +
+When panic
is called, including implicitly for run-time
+errors such as indexing a slice out of bounds or failing a type
+assertion, it immediately stops execution of the current function
+and begins unwinding the stack of the goroutine, running any deferred
+functions along the way. If that unwinding reaches the top of the
+goroutine's stack, the program dies. However, it is possible to
+use the built-in function recover
to regain control
+of the goroutine and resume normal execution.
+
+A call to recover
stops the unwinding and returns the
+argument passed to panic
. Because the only code that
+runs while unwinding is inside deferred functions, recover
+is only useful inside deferred functions.
+
+One application of recover
is to shut down a failing goroutine
+inside a server without killing the other executing goroutines.
+
+func server(workChan <-chan *Work) { + for work := range workChan { + go safelyDo(work) + } +} + +func safelyDo(work *Work) { + defer func() { + if err := recover(); err != nil { + log.Println("work failed:", err) + } + }() + do(work) +} ++ +
+In this example, if do(work)
panics, the result will be
+logged and the goroutine will exit cleanly without disturbing the
+others. There's no need to do anything else in the deferred closure;
+calling recover
handles the condition completely.
+
+Because recover
always returns nil
unless called directly
+from a deferred function, deferred code can call library routines that themselves
+use panic
and recover
without failing. As an example,
+the deferred function in safelyDo
might call a logging function before
+calling recover
, and that logging code would run unaffected
+by the panicking state.
+
+With our recovery pattern in place, the do
+function (and anything it calls) can get out of any bad situation
+cleanly by calling panic
. We can use that idea to
+simplify error handling in complex software. Let's look at an
+idealized version of a regexp
package, which reports
+parsing errors by calling panic
with a local
+error type. Here's the definition of Error
,
+an error
method, and the Compile
function.
+
+// Error is the type of a parse error; it satisfies the error interface. +type Error string +func (e Error) Error() string { + return string(e) +} + +// error is a method of *Regexp that reports parsing errors by +// panicking with an Error. +func (regexp *Regexp) error(err string) { + panic(Error(err)) +} + +// Compile returns a parsed representation of the regular expression. +func Compile(str string) (regexp *Regexp, err error) { + regexp = new(Regexp) + // doParse will panic if there is a parse error. + defer func() { + if e := recover(); e != nil { + regexp = nil // Clear return value. + err = e.(Error) // Will re-panic if not a parse error. + } + }() + return regexp.doParse(str), nil +} ++ +
+If doParse
panics, the recovery block will set the
+return value to nil
—deferred functions can modify
+named return values. It will then check, in the assignment
+to err
, that the problem was a parse error by asserting
+that it has the local type Error
.
+If it does not, the type assertion will fail, causing a run-time error
+that continues the stack unwinding as though nothing had interrupted
+it.
+This check means that if something unexpected happens, such
+as an index out of bounds, the code will fail even though we
+are using panic
and recover
to handle
+parse errors.
+
+With error handling in place, the error
method (because it's a
+method bound to a type, it's fine, even natural, for it to have the same name
+as the builtin error
type)
+makes it easy to report parse errors without worrying about unwinding
+the parse stack by hand:
+
+if pos == 0 { + re.error("'*' illegal at start of expression") +} ++ +
+Useful though this pattern is, it should be used only within a package.
+Parse
turns its internal panic
calls into
+error
values; it does not expose panics
+to its client. That is a good rule to follow.
+
+By the way, this re-panic idiom changes the panic value if an actual +error occurs. However, both the original and new failures will be +presented in the crash report, so the root cause of the problem will +still be visible. Thus this simple re-panic approach is usually +sufficient—it's a crash after all—but if you want to +display only the original value, you can write a little more code to +filter unexpected problems and re-panic with the original error. +That's left as an exercise for the reader. +
+ + +
+Let's finish with a complete Go program, a web server.
+This one is actually a kind of web re-server.
+Google provides a service at chart.apis.google.com
+that does automatic formatting of data into charts and graphs.
+It's hard to use interactively, though,
+because you need to put the data into the URL as a query.
+The program here provides a nicer interface to one form of data: given a short piece of text,
+it calls on the chart server to produce a QR code, a matrix of boxes that encode the
+text.
+That image can be grabbed with your cell phone's camera and interpreted as,
+for instance, a URL, saving you typing the URL into the phone's tiny keyboard.
+
+Here's the complete program. +An explanation follows. +
+{{code "/doc/progs/eff_qr.go" `/package/` `$`}} +
+The pieces up to main
should be easy to follow.
+The one flag sets a default HTTP port for our server. The template
+variable templ
is where the fun happens. It builds an HTML template
+that will be executed by the server to display the page; more about
+that in a moment.
+
+The main
function parses the flags and, using the mechanism
+we talked about above, binds the function QR
to the root path
+for the server. Then http.ListenAndServe
is called to start the
+server; it blocks while the server runs.
+
+QR
just receives the request, which contains form data, and
+executes the template on the data in the form value named s
.
+
+The template package html/template
is powerful;
+this program just touches on its capabilities.
+In essence, it rewrites a piece of HTML text on the fly by substituting elements derived
+from data items passed to templ.Execute
, in this case the
+form value.
+Within the template text (templateStr
),
+double-brace-delimited pieces denote template actions.
+The piece from {{html "{{if .}}"}}
+to {{html "{{end}}"}}
executes only if the value of the current data item, called .
(dot),
+is non-empty.
+That is, when the string is empty, this piece of the template is suppressed.
+
+The two snippets {{html "{{.}}"}}
say to show the data presented to
+the template—the query string—on the web page.
+The HTML template package automatically provides appropriate escaping so the
+text is safe to display.
+
+The rest of the template string is just the HTML to show when the page loads. +If this is too quick an explanation, see the documentation +for the template package for a more thorough discussion. +
++And there you have it: a useful web server in a few lines of code plus some +data-driven HTML text. +Go is powerful enough to make a lot happen in a few lines. +
+ + diff --git a/_content/doc/gccgo_contribute.html b/_content/doc/gccgo_contribute.html new file mode 100644 index 00000000..395902d7 --- /dev/null +++ b/_content/doc/gccgo_contribute.html @@ -0,0 +1,112 @@ + + ++These are some notes on contributing to the gccgo frontend for GCC. +For information on contributing to parts of Go other than gccgo, +see Contributing to the Go project. For +information on building gccgo for yourself, +see Setting up and using gccgo. +For more of the gritty details on the process of doing development +with the gccgo frontend, +see the +file HACKING in the gofrontend repository. +
+ ++You must follow the Go copyright +rules for all changes to the gccgo frontend and the associated +libgo library. Code that is part of GCC rather than gccgo must follow +the general GCC +contribution rules. +
+ +
+The master sources for the gccgo frontend may be found at
+https://go.googlesource.com/gofrontend.
+They are mirrored
+at https://github.com/golang/gofrontend.
+The master sources are not buildable by themselves, but only in
+conjunction with GCC (in the future, other compilers may be
+supported). Changes made to the gccgo frontend are also applied to
+the GCC source code repository hosted at gcc.gnu.org
. In
+the gofrontend
repository, the go
directory
+is mirrored to the gcc/go/gofrontend
directory in the GCC
+repository, and the gofrontend
libgo
+directory is mirrored to the GCC libgo
directory. In
+addition, the test
directory
+from the main Go repository
+is mirrored to the gcc/testsuite/go.test/test
directory
+in the GCC repository.
+
+Changes to these directories always flow from the master sources to +the GCC repository. The files should never be changed in the GCC +repository except by changing them in the master sources and mirroring +them. +
+ +
+The gccgo frontend is written in C++.
+It follows the GNU and GCC coding standards for C++.
+In writing code for the frontend, follow the formatting of the
+surrounding code.
+Almost all GCC-specific code is not in the frontend proper and is
+instead in the GCC sources in the gcc/go
directory.
+
+The run-time library for gccgo is mostly the same as the library
+in the main Go repository.
+The library code in the Go repository is periodically merged into
+the libgo/go
directory of the gofrontend
and
+then the GCC repositories, using the shell
+script libgo/merge.sh
. Accordingly, most library changes
+should be made in the main Go repository. The files outside
+of libgo/go
are gccgo-specific; that said, some of the
+files in libgo/runtime
are based on files
+in src/runtime
in the main Go repository.
+
+All patches must be tested. A patch that introduces new failures is +not acceptable. +
+ +
+To run the gccgo test suite, run make check-go
in your
+build directory. This will run various tests
+under gcc/testsuite/go.*
and will also run
+the libgo
testsuite. This copy of the tests from the
+main Go repository is run using the DejaGNU script found
+in gcc/testsuite/go.test/go-test.exp
.
+
+Most new tests should be submitted to the main Go repository for later
+mirroring into the GCC repository. If there is a need for specific
+tests for gccgo, they should go in
+the gcc/testsuite/go.go-torture
+or gcc/testsuite/go.dg
directories in the GCC repository.
+
+Changes to the Go frontend should follow the same process as for the
+main Go repository, only for the gofrontend
project and
+the gofrontend-dev@googlegroups.com
mailing list
+rather than the go
project and the
+golang-dev@googlegroups.com
mailing list. Those changes
+will then be merged into the GCC sources.
+
+This document explains how to use gccgo, a compiler for +the Go language. The gccgo compiler is a new frontend +for GCC, the widely used GNU compiler. Although the +frontend itself is under a BSD-style license, gccgo is +normally used as part of GCC and is then covered by +the GNU General Public +License (the license covers gccgo itself as part of GCC; it +does not cover code generated by gccgo). +
+ +
+Note that gccgo is not the gc
compiler; see
+the Installing Go instructions for that
+compiler.
+
+The simplest way to install gccgo is to install a GCC binary release +built to include Go support. GCC binary releases are available from +various +websites and are typically included as part of GNU/Linux +distributions. We expect that most people who build these binaries +will include Go support. +
+ ++The GCC 4.7.1 release and all later 4.7 releases include a complete +Go 1 compiler and libraries. +
+ ++Due to timing, the GCC 4.8.0 and 4.8.1 releases are close to but not +identical to Go 1.1. The GCC 4.8.2 release includes a complete Go +1.1.2 implementation. +
+ ++The GCC 4.9 releases include a complete Go 1.2 implementation. +
+ ++The GCC 5 releases include a complete implementation of the Go 1.4 +user libraries. The Go 1.4 runtime is not fully merged, but that +should not be visible to Go programs. +
+ ++The GCC 6 releases include a complete implementation of the Go 1.6.1 +user libraries. The Go 1.6 runtime is not fully merged, but that +should not be visible to Go programs. +
+ ++The GCC 7 releases include a complete implementation of the Go 1.8.1 +user libraries. As with earlier releases, the Go 1.8 runtime is not +fully merged, but that should not be visible to Go programs. +
+ ++The GCC 8 releases include a complete implementation of the Go 1.10.1 +release. The Go 1.10 runtime has now been fully merged into the GCC +development sources, and concurrent garbage collection is fully +supported. +
+ ++The GCC 9 releases include a complete implementation of the Go 1.12.2 +release. +
+ +
+If you cannot use a release, or prefer to build gccgo for yourself, the
+gccgo source code is accessible via Git. The GCC web site has
+instructions for getting the GCC
+source code. The gccgo source code is included. As a convenience, a
+stable version of the Go support is available in the
+devel/gccgo
branch of the main GCC code repository:
+git://gcc.gnu.org/git/gcc.git
.
+This branch is periodically updated with stable Go compiler sources.
+
+Note that although gcc.gnu.org
is the most convenient way
+to get the source code for the Go frontend, it is not where the master
+sources live. If you want to contribute changes to the Go frontend
+compiler, see Contributing to
+gccgo.
+
+Building gccgo is just like building GCC
+with one or two additional options. See
+the instructions on the gcc web
+site. When you run configure
, add the
+option --enable-languages=c,c++,go
(along with other
+languages you may want to build). If you are targeting a 32-bit x86,
+then you will want to build gccgo to default to
+supporting locked compare and exchange instructions; do this by also
+using the configure
option --with-arch=i586
+(or a newer architecture, depending on where you need your programs to
+run). If you are targeting a 64-bit x86, but sometimes want to use
+the -m32
option, then use the configure
+option --with-arch-32=i586
.
+
+On x86 GNU/Linux systems the gccgo compiler is able to +use a small discontiguous stack for goroutines. This permits programs +to run many more goroutines, since each goroutine can use a relatively +small stack. Doing this requires using the gold linker version 2.22 +or later. You can either install GNU binutils 2.22 or later, or you +can build gold yourself. +
+ +
+To build gold yourself, build the GNU binutils,
+using --enable-gold=default
when you run
+the configure
script. Before building, you must install
+the flex and bison packages. A typical sequence would look like
+this (you can replace /opt/gold
with any directory to
+which you have write access):
+
+git clone git://sourceware.org/git/binutils-gdb.git +mkdir binutils-objdir +cd binutils-objdir +../binutils-gdb/configure --enable-gold=default --prefix=/opt/gold +make +make install ++ +
+However you install gold, when you configure gccgo, use the
+option --with-ld=GOLD_BINARY
.
+
+A number of prerequisites are required to build GCC, as
+described on
+the gcc web
+site. It is important to install all the prerequisites before
+running the gcc configure
script.
+The prerequisite libraries can be conveniently downloaded using the
+script contrib/download_prerequisites
in the GCC sources.
+
+
+Once all the prerequisites are installed, then a typical build and
+install sequence would look like this (only use
+the --with-ld
option if you are using the gold linker as
+described above):
+
+git clone --branch devel/gccgo git://gcc.gnu.org/git/gcc.git gccgo +mkdir objdir +cd objdir +../gccgo/configure --prefix=/opt/gccgo --enable-languages=c,c++,go --with-ld=/opt/gold/bin/ld +make +make install ++ +
+The gccgo compiler works like other gcc frontends. As of GCC 5 the gccgo
+installation also includes a version of the go
command,
+which may be used to build Go programs as described at
+https://golang.org/cmd/go.
+
+To compile a file without using the go
command:
+
+gccgo -c file.go ++ +
+That produces file.o
. To link files together to form an
+executable:
+
+gccgo -o file file.o ++ +
+To run the resulting file, you will need to tell the program where to +find the compiled Go packages. There are a few ways to do this: +
+ +
+Set the LD_LIBRARY_PATH
environment variable:
+
+LD_LIBRARY_PATH=${prefix}/lib/gcc/MACHINE/VERSION +[or] +LD_LIBRARY_PATH=${prefix}/lib64/gcc/MACHINE/VERSION +export LD_LIBRARY_PATH ++ +
+Here ${prefix}
is the --prefix
option used
+when building gccgo. For a binary install this is
+normally /usr
. Whether to use lib
+or lib64
depends on the target.
+Typically lib64
is correct for x86_64 systems,
+and lib
is correct for other systems. The idea is to
+name the directory where libgo.so
is found.
+
+Passing a -Wl,-R
option when you link (replace lib with
+lib64 if appropriate for your system):
+
+go build -gccgoflags -Wl,-R,${prefix}/lib/gcc/MACHINE/VERSION +[or] +gccgo -o file file.o -Wl,-R,${prefix}/lib/gcc/MACHINE/VERSION ++
+Use the -static-libgo
option to link statically against
+the compiled packages.
+
+Use the -static
option to do a fully static link (the
+default for the gc
compiler).
+
+The gccgo compiler supports all GCC options
+that are language independent, notably the -O
+and -g
options.
+
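+For example, to compile a file with optimization and debugging
+information:
+
+gccgo -O2 -g -c file.go
+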
+The -fgo-pkgpath=PKGPATH
option may be used to set a
+unique prefix for the package being compiled.
+This option is automatically used by the go command, but you may want
+to use it if you invoke gccgo directly.
+This option is intended for use with large
+programs that contain many packages, in order to allow multiple
+packages to use the same identifier as the package name.
+The PKGPATH
may be any string; a good choice for the
+string is the path used to import the package.
+
+The -I
and -L
options, which are synonyms
+for the compiler, may be used to set the search path for finding
+imports.
+These options are not needed if you build with the go command.
+
+When you compile a file that exports something, the export +information will be stored directly in the object file. +If you build with gccgo directly, rather than with the go command, +then when you import a package, you must tell gccgo how to find the +file. +
+ ++When you import the package FILE with gccgo, +it will look for the import data in the following files, and use the +first one that it finds. + +
FILE.gox
+libFILE.so
+libFILE.a
+FILE.o
+
+FILE.gox
, when used, will typically contain
+nothing but export data. This can be generated from
+FILE.o
via
+
+objcopy -j .go_export FILE.o FILE.gox ++ +
+The gccgo compiler will look in the current
+directory for import files. In more complex scenarios you
+may pass the -I
or -L
option to
+gccgo. Both options take directories to search. The
+-L
option is also passed to the linker.
+
+The gccgo compiler does not currently (2015-06-15) record +the file name of imported packages in the object file. You must +arrange for the imported data to be linked into the program. +Again, this is not necessary when building with the go command. +
+ ++gccgo -c mypackage.go # Exports mypackage +gccgo -c main.go # Imports mypackage +gccgo -o main main.o mypackage.o # Explicitly links with mypackage.o ++ +
+If you use the -g
option when you compile, you can run
+gdb
on your executable. The debugger has only limited
+knowledge about Go. You can set breakpoints, single-step,
+etc. You can print variables, but they will be printed as though they
+had C/C++ types. For numeric types this doesn't matter. Go strings
+and interfaces will show up as two-element structures. Go
+maps and channels are always represented as C pointers to run-time
+structures.
+
+When using gccgo there is limited interoperability with C,
+or with C++ code compiled using extern "C"
.
+
+Basic types map directly: an int32
in Go is
+an int32_t
in C, an int64
is
+an int64_t
, etc.
+The Go type int
is an integer that is the same size as a
+pointer, and as such corresponds to the C type intptr_t
.
+Go byte
is equivalent to C unsigned char
.
+Pointers in Go are pointers in C.
+A Go struct
is the same as C struct
with the
+same fields and types.
+
+The Go string
type is currently defined as a two-element
+structure (this is subject to change):
+
+struct __go_string { + const unsigned char *__data; + intptr_t __length; +}; ++ +
+You can't pass arrays between C and Go. However, a pointer to an
+array in Go is equivalent to a C pointer to the
+equivalent of the element type.
+For example, Go *[10]int
is equivalent to C int*
,
+assuming that the C pointer does point to 10 elements.
+
+A slice in Go is a structure. The current definition is +(this is subject to change): +
+ ++struct __go_slice { + void *__values; + intptr_t __count; + intptr_t __capacity; +}; ++ +
+The type of a Go function is a pointer to a struct (this is +subject to change). The first field in the +struct points to the code of the function, which will be equivalent to +a pointer to a C function whose parameter types are equivalent, with +an additional trailing parameter. The trailing parameter is the +closure, and the argument to pass is a pointer to the Go function +struct. + +When a Go function returns more than one value, the C function returns +a struct. For example, these functions are roughly equivalent: +
+ ++func GoFunction(int) (int, float64) +struct { int i; float64 f; } CFunction(int, void*) ++ +
+Go interface
, channel
, and map
+types have no corresponding C type (interface
is a
+two-element struct and channel
and map
are
+pointers to structs in C, but the structs are deliberately undocumented). C
+enum
types correspond to some integer type, but precisely
+which one is difficult to predict in general; use a cast. C union
+types have no corresponding Go type. C struct
types containing
+bitfields have no corresponding Go type. C++ class
types have
+no corresponding Go type.
+
+Memory allocation is completely different between C and Go, as Go uses
+garbage collection. The exact guidelines in this area are undetermined,
+but it is likely that it will be permitted to pass a pointer to allocated
+memory from C to Go. The responsibility for eventually freeing the pointer
+will remain with the C side, and of course if the C side frees the pointer
+while the Go side still has a copy, the program will fail. When passing a
+pointer from Go to C, the Go function must retain a visible copy of it in
+some Go variable. Otherwise the Go garbage collector may delete the
+pointer while the C function is still using it.
+
+ +
+Go code can call C functions directly using a Go extension implemented
+in gccgo: a function declaration may be preceded by
+//extern NAME
. For example, here is how the C function
+open
can be declared in Go:
+
+//extern open +func c_open(name *byte, mode int, perm int) int ++ +
+The C function naturally expects a NUL-terminated string, which in
+Go is equivalent to a pointer to an array (not a slice!) of
+byte
with a terminating zero byte. So a sample call
+from Go would look like (after importing the syscall
package):
+
+var name = [4]byte{'f', 'o', 'o', 0}; +i := c_open(&name[0], syscall.O_RDONLY, 0); ++ +
+(This serves as an example only; to open a file in Go, please use Go's
+os.Open
function instead).
+
+Note that if the C function can block, such as in a call
+to read
, calling the C function may block the Go program.
+Unless you have a clear understanding of what you are doing, all calls
+between C and Go should be implemented through cgo or SWIG, as for
+the gc
compiler.
+
+The name of Go functions accessed from C is subject to change. At present
+the name of a Go function that does not have a receiver is
+prefix.package.Functionname
. The prefix is set by
+the -fgo-prefix
option used when the package is compiled;
+if the option is not used, the default is go
.
+To call the function from C you must set the name using
+a GCC extension.
+
+extern int go_function(int) __asm__ ("myprefix.mypackage.Function"); ++ +
+The Go version of GCC supports automatically generating
+Go declarations from C code. The facility is rather awkward, and most
+users should use the cgo program with
+the -gccgo
option instead.
+
+Compile your C code as usual, and add the option
+-fdump-go-spec=FILENAME
. This will create the
+file FILENAME
as a side effect of the
+compilation. This file will contain Go declarations for the types,
+variables and functions declared in the C code. C types that cannot
+be represented in Go will be recorded as comments in the Go code. The
+generated file will not have a package
declaration, but
+can otherwise be compiled directly by gccgo.
+
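+For example, a hypothetical C file mytypes.c could be compiled like this,
+producing Go declarations in mytypes.go as a side effect (the file names are
+invented for illustration):
+
+gcc -c -fdump-go-spec=mytypes.go mytypes.c
+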
+This procedure is full of unstated caveats and restrictions and we make no +guarantee that it will not change in the future. It is more useful as a +starting point for real Go code than as a regular procedure. +
diff --git a/_content/doc/go-logo-black.png b/_content/doc/go-logo-black.png new file mode 100644 index 00000000..3077ebda Binary files /dev/null and b/_content/doc/go-logo-black.png differ diff --git a/_content/doc/go-logo-blue.png b/_content/doc/go-logo-blue.png new file mode 100644 index 00000000..8d43a567 Binary files /dev/null and b/_content/doc/go-logo-blue.png differ diff --git a/_content/doc/go-logo-white.png b/_content/doc/go-logo-white.png new file mode 100644 index 00000000..fa29169f Binary files /dev/null and b/_content/doc/go-logo-white.png differ diff --git a/_content/doc/go1.1.html b/_content/doc/go1.1.html new file mode 100644 index 00000000..f615c97e --- /dev/null +++ b/_content/doc/go1.1.html @@ -0,0 +1,1099 @@ + + ++The release of Go version 1 (Go 1 or Go 1.0 for short) +in March of 2012 introduced a new period +of stability in the Go language and libraries. +That stability has helped nourish a growing community of Go users +and systems around the world. +Several "point" releases since +then—1.0.1, 1.0.2, and 1.0.3—have been issued. +These point releases fixed known bugs but made +no non-critical changes to the implementation. +
+ ++This new release, Go 1.1, keeps the promise +of compatibility but adds a couple of significant +(backwards-compatible, of course) language changes, has a long list +of (again, compatible) library changes, and +includes major work on the implementation of the compilers, +libraries, and run-time. +The focus is on performance. +Benchmarking is an inexact science at best, but we see significant, +sometimes dramatic speedups for many of our test programs. +We trust that many of our users' programs will also see improvements +just by updating their Go installation and recompiling. +
+ ++This document summarizes the changes between Go 1 and Go 1.1. +Very little if any code will need modification to run with Go 1.1, +although a couple of rare error cases surface with this release +and need to be addressed if they arise. +Details appear below; see the discussion of +64-bit ints and Unicode literals +in particular. +
+ ++The Go compatibility document promises +that programs written to the Go 1 language specification will continue to operate, +and those promises are maintained. +In the interest of firming up the specification, though, there are +details about some error cases that have been clarified. +There are also some new language features. +
+ ++In Go 1, integer division by a constant zero produced a run-time panic: +
+ ++func f(x int) int { + return x/0 +} ++ +
+In Go 1.1, an integer division by constant zero is not a legal program, so it is a compile-time error. +
+ ++The definition of string and rune literals has been refined to exclude surrogate halves from the +set of valid Unicode code points. +See the Unicode section for more information. +
+ +
+Go 1.1 now implements
+method values,
+which are functions that have been bound to a specific receiver value.
+For instance, given a
+Writer
+value w
,
+the expression
+w.Write
,
+a method value, is a function that will always write to w
; it is equivalent to
+a function literal closing over w
:
+
+func (p []byte) (n int, err error) { + return w.Write(p) +} ++ +
+Method values are distinct from method expressions, which generate functions
+from methods of a given type; the method expression (*bufio.Writer).Write
+is equivalent to a function with an extra first argument, a receiver of type
+(*bufio.Writer)
:
+
+func (w *bufio.Writer, p []byte) (n int, err error) { + return w.Write(p) +} ++ +
+Updating: No existing code is affected; the change is strictly backward-compatible. +
+ +
+Before Go 1.1, a function that returned a value needed an explicit "return"
+or call to panic
at
+the end of the function; this was a simple way to make the programmer
+be explicit about the meaning of the function. But there are many cases
+where a final "return" is clearly unnecessary, such as a function with
+only an infinite "for" loop.
+
+In Go 1.1, the rule about final "return" statements is more permissive. +It introduces the concept of a +terminating statement, +a statement that is guaranteed to be the last one a function executes. +Examples include +"for" loops with no condition and "if-else" +statements in which each half ends in a "return". +If the final statement of a function can be shown syntactically to +be a terminating statement, no final "return" statement is needed. +
+ ++Note that the rule is purely syntactic: it pays no attention to the values in the +code and therefore requires no complex analysis. +
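+For example, under the Go 1.1 rule this function needs no final "return",
+because its last statement is a "for" loop with no condition (a minimal
+sketch for illustration; the names are invented):
+
+func firstPositive(ch chan int) int {
+	for {
+		if v := <-ch; v > 0 {
+			return v
+		}
+	}
+}
+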
+ +
+Updating: The change is backward-compatible, but existing code
+with superfluous "return" statements and calls to panic
may
+be simplified manually.
+Such code can be identified by go vet
.
+
+The GCC release schedule does not coincide with the Go release schedule, so some skew is inevitable in
+gccgo
's releases.
+The 4.8.0 version of GCC shipped in March, 2013 and includes a nearly-Go 1.1 version of gccgo
.
+Its library is a little behind the release, but the biggest difference is that method values are not implemented.
+Sometime around July 2013, we expect 4.8.2 of GCC to ship with a gccgo
+providing a complete Go 1.1 implementation.
+
+In the gc toolchain, the compilers and linkers now use the
+same command-line flag parsing rules as the Go flag package, a departure
+from the traditional Unix flag parsing. This may affect scripts that invoke
+the tool directly.
+For example,
+go tool 6c -Fw -Dfoo
must now be written
+go tool 6c -F -w -D foo
.
+
+The language allows the implementation to choose whether the int
type and
+uint
types are 32 or 64 bits. Previous Go implementations made int
+and uint
32 bits on all systems. Both the gc and gccgo implementations
+now make
+int
and uint
64 bits on 64-bit platforms such as AMD64/x86-64.
+Among other things, this enables the allocation of slices with
+more than 2 billion elements on 64-bit platforms.
+
+Updating:
+Most programs will be unaffected by this change.
+Because Go does not allow implicit conversions between distinct
+numeric types,
+no programs will stop compiling due to this change.
+However, programs that contain implicit assumptions
+that int
is only 32 bits may change behavior.
+For example, this code prints a positive number on 64-bit systems and
+a negative one on 32-bit systems:
+
+x := ^uint32(0) // x is 0xffffffff +i := int(x) // i is -1 on 32-bit systems, 0xffffffff on 64-bit +fmt.Println(i) ++ +
Portable code intending 32-bit sign extension (yielding -1
on all systems)
+would instead say:
+
+i := int(int32(x)) ++ +
+On 64-bit architectures, the maximum heap size has been enlarged substantially, +from a few gigabytes to several tens of gigabytes. +(The exact details depend on the system and may change.) +
+ ++On 32-bit architectures, the heap size has not changed. +
+ ++Updating: +This change should have no effect on existing programs beyond allowing them +to run with larger heaps. +
+ +
+To make it possible to represent code points greater than 65535 in UTF-16,
+Unicode defines surrogate halves,
+a range of code points to be used only in the assembly of large values, and only in UTF-16.
+The code points in that surrogate range are illegal for any other purpose.
+In Go 1.1, this constraint is honored by the compiler, libraries, and run-time:
+a surrogate half is illegal as a rune value, when encoded as UTF-8, or when
+encoded in isolation as UTF-16.
+When encountered, for example in converting from a rune to UTF-8, it is
+treated as an encoding error and will yield the replacement rune,
+utf8.RuneError
,
+U+FFFD.
+
+This program, +
+ ++import "fmt" + +func main() { + fmt.Printf("%+q\n", string(0xD800)) +} ++ +
+printed "\ud800"
in Go 1.0, but prints "\ufffd"
in Go 1.1.
+
+Surrogate-half Unicode values are now illegal in rune and string constants, so constants such as
+'\ud800'
and "\ud800"
are now rejected by the compilers.
+When written explicitly as UTF-8 encoded bytes,
+such strings can still be created, as in "\xed\xa0\x80"
.
+However, when such a string is decoded as a sequence of runes, as in a range loop, it will yield only utf8.RuneError
+values.
+
+The Unicode byte order mark U+FEFF, encoded in UTF-8, is now permitted as the first +character of a Go source file. +Even though its appearance in the byte-order-free UTF-8 encoding is clearly unnecessary, +some editors add the mark as a kind of "magic number" identifying a UTF-8 encoded file. +
+ ++Updating: +Most programs will be unaffected by the surrogate change. +Programs that depend on the old behavior should be modified to avoid the issue. +The byte-order-mark change is strictly backward-compatible. +
+ +
+A major addition to the tools is a race detector, a way to
+find bugs in programs caused by concurrent access of the same
+variable, where at least one of the accesses is a write.
+This new facility is built into the go
tool.
+For now, it is only available on Linux, Mac OS X, and Windows systems with
+64-bit x86 processors.
+To enable it, set the -race
flag when building or testing your program
+(for instance, go test -race
).
+The race detector is documented in a separate article.
+
+Due to the change of the int
to 64 bits and
+a new internal representation of functions,
+the arrangement of function arguments on the stack has changed in the gc toolchain.
+Functions written in assembly will need to be revised at least
+to adjust frame pointer offsets.
+
+Updating:
+The go vet
command now checks that functions implemented in assembly
+match the Go function prototypes they implement.
+
+The go
command has acquired several
+changes intended to improve the experience for new Go users.
+
+First, when compiling, testing, or running Go code, the go
command will now give more detailed error messages,
+including a list of paths searched, when a package cannot be located.
+
+$ go build foo/quxx +can't load package: package foo/quxx: cannot find package "foo/quxx" in any of: + /home/you/go/src/pkg/foo/quxx (from $GOROOT) + /home/you/src/foo/quxx (from $GOPATH) ++ +
+Second, the go get
command no longer allows $GOROOT
+as the default destination when downloading package source.
+To use the go get
+command, a valid $GOPATH
is now required.
+
+$ GOPATH= go get code.google.com/p/foo/quxx +package code.google.com/p/foo/quxx: cannot download, $GOPATH not set. For more details see: go help gopath ++ +
+Finally, as a result of the previous change, the go get
command will also fail
+when $GOPATH
and $GOROOT
are set to the same value.
+
+$ GOPATH=$GOROOT go get code.google.com/p/foo/quxx +warning: GOPATH set to GOROOT (/home/you/go) has no effect +package code.google.com/p/foo/quxx: cannot download, $GOPATH must not be set to $GOROOT. For more details see: go help gopath ++ +
+The go test
+command no longer deletes the binary when run with profiling enabled,
+to make it easier to analyze the profile.
+The implementation sets the -c
flag automatically, so after running,
+
+$ go test -cpuprofile cpuprof.out mypackage ++ +
+the file mypackage.test
will be left in the directory where go test
was run.
+
+The go test
+command can now generate profiling information
+that reports where goroutines are blocked, that is,
+where they tend to stall waiting for an event such as a channel communication.
+The information is presented as a
+blocking profile
+enabled with the
+-blockprofile
+option of
+go test
.
+Run go help test
for more information.
+
+The fix
command, usually run as
+go fix
, no longer applies fixes to update code from
+before Go 1 to use Go 1 APIs.
+To update pre-Go 1 code to Go 1.1, use a Go 1.0 toolchain
+to convert the code to Go 1.0 first.
+
+The "go1.1
" tag has been added to the list of default
+build constraints.
+This permits packages to take advantage of the new features in Go 1.1 while
+remaining compatible with earlier versions of Go.
+
+To build a file only with Go 1.1 and above, add this build constraint: +
+ ++// +build go1.1 ++ +
+To build a file only with Go 1.0.x, use the converse constraint: +
+ ++// +build !go1.1 ++ +
+The Go 1.1 toolchain adds experimental support for freebsd/arm
,
+netbsd/386
, netbsd/amd64
, netbsd/arm
,
+openbsd/386
and openbsd/amd64
platforms.
+
+An ARMv6 or later processor is required for freebsd/arm
or
+netbsd/arm
.
+
+Go 1.1 adds experimental support for cgo
on linux/arm
.
+
+When cross-compiling, the go
tool will disable cgo
+support by default.
+
+To explicitly enable cgo
, set CGO_ENABLED=1
.
+
+The performance of code compiled with the Go 1.1 gc tool suite should be noticeably +better for most Go programs. +Typical improvements relative to Go 1.0 seem to be about 30%-40%, sometimes +much more, but occasionally less or even non-existent. +There are too many small performance-driven tweaks through the tools and libraries +to list them all here, but the following major changes are worth noting: +
+ +append
+and interface conversions.
+The various routines to scan textual input in the
+bufio
+package,
+ReadBytes
,
+ReadString
+and particularly
+ReadLine
,
+are needlessly complex to use for simple purposes.
+In Go 1.1, a new type,
+Scanner
,
+has been added to make it easier to do simple tasks such as
+read the input as a sequence of lines or space-delimited words.
+It simplifies the problem by terminating the scan on problematic
+input such as pathologically long lines, and having a simple
+default: line-oriented input, with each line stripped of its terminator.
+Here is code to reproduce the input a line at a time:
+
+scanner := bufio.NewScanner(os.Stdin) +for scanner.Scan() { + fmt.Println(scanner.Text()) // Println will add back the final '\n' +} +if err := scanner.Err(); err != nil { + fmt.Fprintln(os.Stderr, "reading standard input:", err) +} ++ +
+Scanning behavior can be adjusted through a function to control subdividing the input
+(see the documentation for SplitFunc
),
+but for tough problems or the need to continue past errors, the older interface
+may still be required.
+
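+For example, a minimal sketch of reading space-delimited words instead of
+lines, using the predefined ScanWords split function (illustrative only):
+
+scanner := bufio.NewScanner(os.Stdin)
+scanner.Split(bufio.ScanWords) // tokens are space-delimited words rather than lines
+for scanner.Scan() {
+	fmt.Println(scanner.Text())
+}
+if err := scanner.Err(); err != nil {
+	fmt.Fprintln(os.Stderr, "reading standard input:", err)
+}
+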
+The protocol-specific resolvers in the net
package were formerly
+lax about the network name passed in.
+Although the documentation was clear
+that the only valid networks for
+ResolveTCPAddr
+are "tcp"
,
+"tcp4"
, and "tcp6"
, the Go 1.0 implementation silently accepted any string.
+The Go 1.1 implementation returns an error if the network is not one of those strings.
+The same is true of the other protocol-specific resolvers ResolveIPAddr
,
+ResolveUDPAddr
, and
+ResolveUnixAddr
.
+
+The previous implementation of
+ListenUnixgram
+returned a
+UDPConn
as
+a representation of the connection endpoint.
+The Go 1.1 implementation instead returns a
+UnixConn
+to allow reading and writing
+with its
+ReadFrom
+and
+WriteTo
+methods.
+
+The data structures
+IPAddr
,
+TCPAddr
, and
+UDPAddr
+add a new string field called Zone
.
+Code using untagged composite literals (e.g. net.TCPAddr{ip, port}
)
+instead of tagged literals (net.TCPAddr{IP: ip, Port: port}
)
+will break due to the new field.
+The Go 1 compatibility rules allow this change: client code must use tagged literals to avoid such breakages.
+
+Updating:
+To correct breakage caused by the new struct field,
+go fix
will rewrite code to add tags for these types.
+More generally, go vet
will identify composite literals that
+should be revised to use field tags.
+
+The reflect
package has several significant additions.
+
+It is now possible to run a "select" statement using
+the reflect
package; see the description of
+Select
+and
+SelectCase
+for details.
+
+The new method
+Value.Convert
+(or
+Type.ConvertibleTo
)
+provides functionality to execute a Go conversion or type assertion operation
+on a
+Value
+(or test for its possibility).
+
+The new function
+MakeFunc
+creates a wrapper function to make it easier to call a function with existing
+Values
,
+doing the standard Go conversions among the arguments, for instance
+to pass an actual int
to a formal interface{}
.
+
+Finally, the new functions
+ChanOf
,
+MapOf
+and
+SliceOf
+construct new
+Types
+from existing types, for example to construct the type []T
given
+only T
.
+
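+For example, a minimal sketch of SliceOf together with the existing MakeSlice
+(illustrative only):
+
+elem := reflect.TypeOf(int(0))     // the type int
+sliceType := reflect.SliceOf(elem) // the type []int, constructed at run time
+s := reflect.MakeSlice(sliceType, 0, 8)
+fmt.Println(sliceType, s.Cap())    // prints "[]int 8"
+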
+On FreeBSD, Linux, NetBSD, OS X and OpenBSD, previous versions of the
+time
package
+returned times with microsecond precision.
+The Go 1.1 implementation on these
+systems now returns times with nanosecond precision.
+Programs that write to an external format with microsecond precision
+and read it back, expecting to recover the original value, will be affected
+by the loss of precision.
+There are two new methods of Time
,
+Round
+and
+Truncate
,
+that can be used to remove precision from a time before passing it to
+external storage.
+
+The new method
+YearDay
+returns the one-indexed integral day number of the year specified by the time value.
+
+The
+Timer
+type has a new method
+Reset
+that modifies the timer to expire after a specified duration.
+
+Finally, the new function
+ParseInLocation
+is like the existing
+Parse
+but parses the time in the context of a location (time zone), ignoring
+time zone information in the parsed string.
+This function addresses a common source of confusion in the time API.
+
+Updating: +Code that needs to read and write times using an external format with +lower precision should be modified to use the new methods. +
+ +
+To make it easier for binary distributions to access them if desired, the exp
+and old
source subtrees, which are not included in binary distributions,
+have been moved to the new go.exp
subrepository at
+code.google.com/p/go.exp
. To access the ssa
package,
+for example, run
+
+$ go get code.google.com/p/go.exp/ssa ++ +
+and then in Go source, +
+ ++import "code.google.com/p/go.exp/ssa" ++ +
+The old package exp/norm
has also been moved, but to a new repository
+go.text
, where the Unicode APIs and other text-related packages will
+be developed.
+
+There are three new packages. +
+ +go/format
package provides
+a convenient way for a program to access the formatting capabilities of the
+go fmt
command.
+It has two functions,
+Node
to format a Go parser
+Node
,
+and
+Source
+to reformat arbitrary Go source code into the standard format as provided by the
+go fmt
command.
+net/http/cookiejar
package provides the basics for managing HTTP cookies.
+runtime/race
package provides low-level facilities for data race detection.
+It is internal to the race detector and does not otherwise export any user-visible functionality.
++The following list summarizes a number of minor changes to the library, mostly additions. +See the relevant package documentation for more information about each change. +
+ +bytes
package has two new functions,
+TrimPrefix
+and
+TrimSuffix
,
+with self-evident properties.
+Also, the Buffer
type
+has a new method
+Grow
that
+provides some control over memory allocation inside the buffer.
+Finally, the
+Reader
type now has a
+WriteTo
method
+so it implements the
+io.WriterTo
interface.
+compress/gzip
package has
+a new Flush
+method for its
+Writer
+type that flushes its underlying flate.Writer
.
+crypto/hmac
package has a new function,
+Equal
, to compare two MACs.
+crypto/x509
package
+now supports PEM blocks (see
+DecryptPEMBlock
for instance),
+and a new function
+ParseECPrivateKey
to parse elliptic curve private keys.
+database/sql
package
+has a new
+Ping
+method for its
+DB
+type that tests the health of the connection.
+database/sql/driver
package
+has a new
+Queryer
+interface that a
+Conn
+may implement to improve performance.
+encoding/json
package's
+Decoder
+has a new method
+Buffered
+to provide access to the remaining data in its buffer,
+as well as a new method
+UseNumber
+to unmarshal a value into the new type
+Number
,
+a string, rather than a float64.
+encoding/xml
package
+has a new function,
+EscapeText
,
+which writes escaped XML output,
+and a method on
+Encoder
,
+Indent
,
+to specify indented output.
+go/ast
package, a
+new type CommentMap
+and associated methods makes it easier to extract and process comments in Go programs.
+go/doc
package,
+the parser now keeps better track of stylized annotations such as TODO(joe)
+throughout the code,
+information that the godoc
+command can filter or present according to the value of the -notes
flag.
+html/template
+package has been removed; programs that depend on it will break.
+image/jpeg
package now
+reads progressive JPEG files and handles a few more subsampling configurations.
+io
package now exports the
+io.ByteWriter
interface to capture the common
+functionality of writing a byte at a time.
+It also exports a new error, ErrNoProgress
,
+used to indicate a Read
implementation is looping without delivering data.
+log/syslog
package now provides better support
+for OS-specific logging features.
+math/big
package's
+Int
type
+now has methods
+MarshalJSON
+and
+UnmarshalJSON
+to convert to and from a JSON representation.
+Also,
+Int
+can now convert directly to and from a uint64
using
+Uint64
+and
+SetUint64
,
+while
+Rat
+can do the same with float64
using
+Float64
+and
+SetFloat64
.
+mime/multipart
package
+has a new method for its
+Writer
,
+SetBoundary
,
+to define the boundary separator used to package the output.
+The Reader
also now
+transparently decodes any quoted-printable
parts and removes
+the Content-Transfer-Encoding
header when doing so.
+net
package's
+ListenUnixgram
+function has changed return types: it now returns a
+UnixConn
+rather than a
+UDPConn
, which was
+clearly a mistake in Go 1.0.
+Since this API change fixes a bug, it is permitted by the Go 1 compatibility rules.
+net
package includes a new type,
+Dialer
, to supply options to
+Dial
.
+net
package adds support for
+link-local IPv6 addresses with zone qualifiers, such as fe80::1%lo0
.
+The address structures IPAddr
,
+UDPAddr
, and
+TCPAddr
+record the zone in a new field, and functions that expect string forms of these addresses, such as
+Dial
,
+ResolveIPAddr
,
+ResolveUDPAddr
, and
+ResolveTCPAddr
,
+now accept the zone-qualified form.
+net
package adds
+LookupNS
to its suite of resolving functions.
+LookupNS
returns the NS records for a host name.
+net
package adds protocol-specific
+packet reading and writing methods to
+IPConn
+(ReadMsgIP
+and WriteMsgIP
) and
+UDPConn
+(ReadMsgUDP
and
+WriteMsgUDP
).
+These are specialized versions of PacketConn
's
+ReadFrom
and WriteTo
methods that provide access to out-of-band data associated
+with the packets.
+ net
package adds methods to
+UnixConn
to allow closing half of the connection
+(CloseRead
and
+CloseWrite
),
+matching the existing methods of TCPConn
.
+net/http
package includes several new additions.
+ParseTime
parses a time string, trying
+several common HTTP time formats.
+The PostFormValue
method of
+Request
is like
+FormValue
but ignores URL parameters.
+The CloseNotifier
interface provides a mechanism
+for a server handler to discover when a client has disconnected.
+The ServeMux
type now has a
+Handler
method to access a path's
+Handler
without executing it.
+The Transport
can now cancel an in-flight request with
+CancelRequest
.
+Finally, the Transport is now more aggressive at closing TCP connections when
+a Response.Body
is closed before
+being fully consumed.
+net/mail
package has two new functions,
+ParseAddress
and
+ParseAddressList
,
+to parse RFC 5322-formatted mail addresses into
+Address
structures.
+net/smtp
package's
+Client
type has a new method,
+Hello
,
+which transmits a HELO
or EHLO
message to the server.
+net/textproto
package
+has two new functions,
+TrimBytes
and
+TrimString
,
+which do ASCII-only trimming of leading and trailing spaces.
+os.FileMode.IsRegular
makes it easy to ask if a file is a plain file.
+os/signal
package has a new function,
+Stop
, which stops the package delivering
+any further signals to the channel.
+regexp
package
+now supports Unix-original leftmost-longest matches through the
+Regexp.Longest
+method, while
+Regexp.Split
slices
+strings into pieces based on separators defined by the regular expression.
+runtime/debug
package
+has three new functions regarding memory usage.
+The FreeOSMemory
+function triggers a run of the garbage collector and then attempts to return unused
+memory to the operating system;
+the ReadGCStats
+function retrieves statistics about the collector; and
+SetGCPercent
+provides a programmatic way to control how often the collector runs,
+including disabling it altogether.
+sort
package has a new function,
+Reverse
.
+Wrapping the argument of a call to
+sort.Sort
+with a call to Reverse
causes the sort order to be reversed.
+strings
package has two new functions,
+TrimPrefix
+and
+TrimSuffix
+with self-evident properties, and the new method
+Reader.WriteTo
so the
+Reader
+type now implements the
+io.WriterTo
interface.
+syscall
package's
+Fchflags
function on various BSDs
+(including Darwin) has changed signature.
+It now takes an int as the first parameter instead of a string.
+Since this API change fixes a bug, it is permitted by the Go 1 compatibility rules.
+syscall
package also has received many updates
+to make it more inclusive of constants and system calls for each supported operating system.
+testing
package now automates the generation of allocation
+statistics in tests and benchmarks using the new
+AllocsPerRun
function. And the
+ReportAllocs
+method on testing.B
will enable printing of
+memory allocation statistics for the calling benchmark. It also introduces the
+AllocsPerOp
method of
+BenchmarkResult
.
+There is also a new
+Verbose
function to test the state of the -v
+command-line flag,
+and a new
+Skip
method of
+testing.B
and
+testing.T
+to simplify skipping an inappropriate test.
+text/template
+and
+html/template
packages,
+templates can now use parentheses to group the elements of pipelines, simplifying the construction of complex pipelines.
+Also, as part of the new parser, the
+Node
interface got two new methods to provide
+better error reporting.
+Although this violates the Go 1 compatibility rules,
+no existing code should be affected because this interface is explicitly intended only to be used
+by the
+text/template
+and
+html/template
+packages and there are safeguards to guarantee that.
+unicode
package has been updated to Unicode version 6.2.0.
+unicode/utf8
package,
+the new function ValidRune
reports whether the rune is a valid Unicode code point.
+To be valid, a rune must be in range and not be a surrogate half.
++The latest Go release, version 1.10, arrives six months after Go 1.9. +Most of its changes are in the implementation of the toolchain, runtime, and libraries. +As always, the release maintains the Go 1 promise of compatibility. +We expect almost all Go programs to continue to compile and run as before. +
+ +
+This release improves caching of built packages,
+adds caching of successful test results,
+runs vet automatically during tests,
+and
+permits passing string values directly between Go and C using cgo.
+A new hard-coded set of safe compiler options may cause
+unexpected invalid
+flag
errors in code that built successfully with older
+releases.
+
+There are no significant changes to the language specification. +
+ +
+A corner case involving shifts of untyped constants has been clarified,
+and as a result the compilers have been updated to allow the index expression
+x[1.0
<<
s]
where s
is an unsigned integer;
+the go/types package already did.
+
+The grammar for method expressions has been updated to relax the
+syntax to allow any type expression as a receiver;
+this matches what the compilers were already implementing.
+For example, struct{io.Reader}.Read
is a valid, if unusual,
+method expression that the compilers already accepted and is
+now permitted by the language grammar.
+
+There are no new supported operating systems or processor architectures in this release. +Most of the work has focused on strengthening the support for existing ports, +in particular new instructions in the assembler +and improvements to the code generated by the compilers. +
+ ++As announced in the Go 1.9 release notes, +Go 1.10 now requires FreeBSD 10.3 or later; +support for FreeBSD 9.3 has been removed. +
+ +
+Go now runs on NetBSD again but requires the unreleased NetBSD 8.
+Only GOARCH
amd64
and 386
have
+been fixed. The arm
port is still broken.
+
+On 32-bit MIPS systems, the new environment variable settings
+GOMIPS=hardfloat
(the default) and
+GOMIPS=softfloat
select whether to use
+hardware instructions or software emulation for floating-point computations.
+
+Go 1.10 is the last release that will run on OpenBSD 6.0. +Go 1.11 will require OpenBSD 6.2. +
+ ++Go 1.10 is the last release that will run on OS X 10.8 Mountain Lion or OS X 10.9 Mavericks. +Go 1.11 will require OS X 10.10 Yosemite or later. +
+ ++Go 1.10 is the last release that will run on Windows XP or Windows Vista. +Go 1.11 will require Windows 7 or later. +
+ +
+If the environment variable $GOROOT
is unset,
+the go tool previously used the default GOROOT
+set during toolchain compilation.
+Now, before falling back to that default, the go tool attempts to
+deduce GOROOT
from its own executable path.
+This allows binary distributions to be unpacked anywhere in the
+file system and then be used without setting GOROOT
+explicitly.
+
+By default, the go tool creates its temporary files and directories
+in the system temporary directory (for example, $TMPDIR
on Unix).
+If the new environment variable $GOTMPDIR
is set,
+the go tool will creates its temporary files and directories in that directory instead.
+
+The go
build
command now detects out-of-date packages
+purely based on the content of source files, specified build flags, and metadata stored in the compiled packages.
+Modification times are no longer consulted or relevant.
+The old advice to add -a
to force a rebuild in cases where
+the modification times were misleading for one reason or another
+(for example, changes in build flags) is no longer necessary:
+builds now always detect when packages must be rebuilt.
+(If you observe otherwise, please file a bug.)
+
+The go
build
-asmflags
, -gcflags
, -gccgoflags
, and -ldflags
options
+now apply by default only to the packages listed directly on the command line.
+For example, go
build
-gcflags=-m
mypkg
+passes the compiler the -m
flag when building mypkg
+but not its dependencies.
+The new, more general form -asmflags=pattern=flags
(and similarly for the others)
+applies the flags
only to the packages matching the pattern.
+For example: go
install
-ldflags=cmd/gofmt=-X=main.version=1.2.3
cmd/...
+installs all the commands matching cmd/...
but only applies the -X
option
+to the linker flags for cmd/gofmt
.
+For more details, see go
help
build
.
+
+The go
build
command now maintains a cache of
+recently built packages, separate from the installed packages in $GOROOT/pkg
or $GOPATH/pkg
.
+The effect of the cache should be to speed builds that do not explicitly install packages
+or when switching between different copies of source code (for example, when changing
+back and forth between different branches in a version control system).
+The old advice to add the -i
flag for speed, as in go
build
-i
+or go
test
-i
,
+is no longer necessary: builds run just as fast without -i
.
+For more details, see go
help
cache
.
+
+The go
install
command now installs only the
+packages and commands listed directly on the command line.
+For example, go
install
cmd/gofmt
+installs the gofmt program but not any of the packages on which it depends.
+The new build cache makes future commands still run as quickly as if the
+dependencies had been installed.
+To force the installation of dependencies, use the new
+go
install
-i
flag.
+Installing dependency packages should not be necessary in general,
+and the very concept of installed packages may disappear in a future release.
+
+Many details of the go
build
implementation have changed to support these improvements.
+One new requirement implied by these changes is that
+binary-only packages must now declare accurate import blocks in their
+stub source code, so that those imports can be made available when
+linking a program using the binary-only package.
+For more details, see go
help
filetype
.
+
+The go
test
command now caches test results:
+if the test executable and command line match a previous run
+and the files and environment variables consulted by that run
+have not changed either, go
test
will print
+the previous test output, replacing the elapsed time with the string “(cached).”
+Test caching applies only to successful test results;
+only to go
test
+commands with an explicit list of packages; and
+only to command lines using a subset of the
+-cpu
, -list
, -parallel
,
+-run
, -short
, and -v
test flags.
+The idiomatic way to bypass test caching is to use -count=1
.
+
+The go
test
command now automatically runs
+go
vet
on the package being tested,
+to identify significant problems before running the test.
+Any such problems are treated like build errors and prevent execution of the test.
+Only a high-confidence subset of the available go
vet
+checks are enabled for this automatic check.
+To disable the running of go
vet
, use
+go
test
-vet=off
.
+
+The go
test
-coverpkg
flag now
+interprets its argument as a comma-separated list of patterns to match against
+the dependencies of each test, not as a list of packages to load anew.
+For example, go
test
-coverpkg=all
+is now a meaningful way to run a test with coverage enabled for the test package
+and all its dependencies.
+Also, the go
test
-coverprofile
option is now
+supported when running multiple tests.
+
+In case of failure due to timeout, tests are now more likely to write their profiles before exiting. +
+ +
+The go
test
command now always
+merges the standard output and standard error from a given test binary execution
+and writes both to go
test
's standard output.
+In past releases, go
test
only applied this
+merging most of the time.
+
+The go
test
-v
output
+now includes PAUSE
and CONT
status update
+lines to mark when parallel tests pause and continue.
+
+The new go
test
-failfast
flag
+disables running additional tests after any test fails.
+Note that tests running in parallel with the failing test are allowed to complete.
+
+Finally, the new go
test
-json
flag
+filters test output through the new command
+go
tool
test2json
+to produce a machine-readable JSON-formatted description of test execution.
+This allows the creation of rich presentations of test execution
+in IDEs and other tools.
+
+For more details about all these changes,
+see go
help
test
+and the test2json documentation.
+
+Options specified by cgo using #cgo CFLAGS
and the like
+are now checked against a list of permitted options.
+This closes a security hole in which a downloaded package uses
+compiler options like
+-fplugin
+to run arbitrary code on the machine where it is being built.
+This can cause a build error such as invalid flag in #cgo CFLAGS
.
+For more background, and how to handle this error, see
+https://golang.org/s/invalidflag.
+
+Cgo now implements a C typedef like “typedef
X
Y
” using a Go type alias,
+so that Go code may use the types C.X
and C.Y
interchangeably.
+It also now supports the use of niladic function-like macros.
+Also, the documentation has been updated to clarify that
+Go structs and Go arrays are not supported in the type signatures of cgo-exported functions.
+
+Cgo now supports direct access to Go string values from C.
+Functions in the C preamble may use the type _GoString_
+to accept a Go string as an argument.
+C code may call _GoStringLen
and _GoStringPtr
+for direct access to the contents of the string.
+A value of type _GoString_
+may be passed in a call to an exported Go function that takes an argument of Go type string
.
+
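+For example, a minimal sketch of a C helper that takes a Go string directly
+(the function name printString is invented for illustration):
+
+package main
+
+// #include <stdio.h>
+// static void printString(_GoString_ s) {
+//     // _GoStringPtr and _GoStringLen give direct access to the string's bytes.
+//     printf("%.*s\n", (int)_GoStringLen(s), _GoStringPtr(s));
+// }
+import "C"
+
+func main() {
+	C.printString("hello") // a Go string passed directly to the C function
+}
+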
+During toolchain bootstrap, the environment variables CC
and CC_FOR_TARGET
specify
+the default C compiler that the resulting toolchain will use for host and target builds, respectively.
+However, if the toolchain will be used with multiple targets, it may be necessary to specify a different C compiler for each
+(for example, a different compiler for darwin/arm64
versus linux/ppc64le
).
+The new set of environment variables CC_FOR_goos_goarch
+allows specifying a different default C compiler for each target.
+Note that these variables only apply during toolchain bootstrap,
+to set the defaults used by the resulting toolchain.
+Later go
build
commands use the CC
environment
+variable or else the built-in default.
+
+Cgo now translates some C types that would normally map to a pointer
+type in Go, to a uintptr
instead. These types include
+the CFTypeRef
hierarchy in Darwin's CoreFoundation
+framework and the jobject
hierarchy in Java's JNI
+interface.
+
+These types must be uintptr
on the Go side because they
+would otherwise confuse the Go garbage collector; they are sometimes
+not really pointers but data structures encoded in a pointer-sized integer.
+Pointers to Go memory must not be stored in these uintptr
values.
+
+Because of this change, values of the affected types need to be
+zero-initialized with the constant 0
instead of the
+constant nil
. Go 1.10 provides gofix
+modules to help with that rewrite:
+
+go tool fix -r cftype <pkg> +go tool fix -r jni <pkg> ++ +
+For more details, see the cgo documentation. +
+ +
+The go
doc
tool now adds functions returning slices of T
or *T
+to the display of type T
, similar to the existing behavior for functions returning single T
or *T
results.
+For example:
+
+$ go doc mail.Address +package mail // import "net/mail" + +type Address struct { + Name string + Address string +} + Address represents a single mail address. + +func ParseAddress(address string) (*Address, error) +func ParseAddressList(list string) ([]*Address, error) +func (a *Address) String() string +$ ++ +
+Previously, ParseAddressList
was only shown in the package overview (go
doc
mail
).
+
+The go
fix
tool now replaces imports of "golang.org/x/net/context"
+with "context"
.
+(Forwarding aliases in the former make it completely equivalent to the latter when using Go 1.9 or later.)
+
+The go
get
command now supports Fossil source code repositories.
+
+The blocking and mutex profiles produced by the runtime/pprof
package
+now include symbol information, so they can be viewed
+in go
tool
pprof
+without the binary that produced the profile.
+(All other profile types were changed to include symbol information in Go 1.9.)
+
+The go
tool
pprof
+profile visualizer has been updated to git version 9e20b5b (2017-11-08)
+from github.com/google/pprof,
+which includes an updated web interface.
+
+The go
vet
command now always has access to
+complete, up-to-date type information when checking packages, even for packages using cgo or vendored imports.
+The reports should be more accurate as a result.
+Note that only go
vet
has access to this information;
+the more low-level go
tool
vet
does not
+and should be avoided except when working on vet
itself.
+(As of Go 1.9, go
vet
provides access to all the same flags as
+go
tool
vet
.)
+
+This release includes a new overview of available Go program diagnostic tools. +
+ +
+Two minor details of the default formatting of Go source code have changed.
+First, certain complex three-index slice expressions previously formatted like
+x[i+1
:
j:k]
and now
+format with more consistent spacing: x[i+1
:
j
:
k]
.
+Second, single-method interface literals written on a single line,
+which are sometimes used in type assertions,
+are no longer split onto multiple lines.
+
+Note that these kinds of minor updates to gofmt are expected from time to time. +In general, we recommend against building systems that check that source code +matches the output of a specific version of gofmt. +For example, a continuous integration test that fails if any code already checked into +a repository is not “properly formatted” is inherently fragile and not recommended. +
+ +
+If multiple programs must agree about which version of gofmt is used to format a source file,
+we recommend that they do this by arranging to invoke the same gofmt binary.
+For example, in the Go open source repository, our Git pre-commit hook is written in Go
+and could import go/format
directly, but instead it invokes the gofmt
+binary found in the current path, so that the pre-commit hook need not be recompiled
+each time gofmt
changes.
+
+The compiler includes many improvements to the performance of generated code, +spread fairly evenly across the supported architectures. +
+ ++The DWARF debug information recorded in binaries has been improved in a few ways: +constant values are now recorded; +line number information is more accurate, making source-level stepping through a program work better; +and each package is now presented as its own DWARF compilation unit. +
+ +
+The various build modes
+have been ported to more systems.
+Specifically, c-shared
now works on linux/ppc64le
, windows/386
, and windows/amd64
;
+pie
now works on darwin/amd64
and also forces the use of external linking on all systems;
+and plugin
now works on linux/ppc64le
and darwin/amd64
.
+
+The linux/ppc64le
port now requires the use of external linking
+with any programs that use cgo, even uses by the standard library.
+
+For the ARM 32-bit port, the assembler now supports the instructions
+BFC
,
+BFI
,
+BFX
,
+BFXU
,
+FMULAD
,
+FMULAF
,
+FMULSD
,
+FMULSF
,
+FNMULAD
,
+FNMULAF
,
+FNMULSD
,
+FNMULSF
,
+MULAD
,
+MULAF
,
+MULSD
,
+MULSF
,
+NMULAD
,
+NMULAF
,
+NMULD
,
+NMULF
,
+NMULSD
,
+NMULSF
,
+XTAB
,
+XTABU
,
+XTAH
,
+and
+XTAHU
.
+
+For the ARM 64-bit port, the assembler now supports the
+VADD
,
+VADDP
,
+VADDV
,
+VAND
,
+VCMEQ
,
+VDUP
,
+VEOR
,
+VLD1
,
+VMOV
,
+VMOVI
,
+VMOVS
,
+VORR
,
+VREV32
,
+and
+VST1
+instructions.
+
+For the PowerPC 64-bit port, the assembler now supports the POWER9 instructions
+ADDEX
,
+CMPEQB
,
+COPY
,
+DARN
,
+LDMX
,
+MADDHD
,
+MADDHDU
,
+MADDLD
,
+MFVSRLD
,
+MTVSRDD
,
+MTVSRWS
,
+PASTECC
,
+VCMPNEZB
,
+VCMPNEZBCC
,
+and
+VMSUMUDM
.
+
+For the S390X port, the assembler now supports the
+TMHH
,
+TMHL
,
+TMLH
,
+and
+TMLL
+instructions.
+
+For the X86 64-bit port, the assembler now supports 359 new instructions,
+including the full AVX, AVX2, BMI, BMI2, F16C, FMA3, SSE2, SSE3, SSSE3, SSE4.1, and SSE4.2 extension sets.
+The assembler also no longer implements MOVL
$0,
AX
+as an XORL
instruction,
+to avoid clearing the condition flags unexpectedly.
+
+Due to the alignment of Go's semiannual release schedule with GCC's +annual release schedule, +GCC release 7 contains the Go 1.8.3 version of gccgo. +We expect that the next release, GCC 8, will contain the Go 1.10 +version of gccgo. +
+ +
+The behavior of nested calls to
+LockOSThread
and
+UnlockOSThread
+has changed.
+These functions control whether a goroutine is locked to a specific operating system thread,
+so that the goroutine only runs on that thread, and the thread only runs that goroutine.
+Previously, calling LockOSThread
more than once in a row
+was equivalent to calling it once, and a single UnlockOSThread
+always unlocked the thread.
+Now, the calls nest: if LockOSThread
is called multiple times,
+UnlockOSThread
must be called the same number of times
+in order to unlock the thread.
+Existing code that was careful not to nest these calls will remain correct.
+Existing code that incorrectly assumed the calls nested will become correct.
+Most uses of these functions in public Go source code falls into the second category.
+
+Because one common use of LockOSThread
and UnlockOSThread
+is to allow Go code to reliably modify thread-local state (for example, Linux or Plan 9 name spaces),
+the runtime now treats locked threads as unsuitable for reuse or for creating new threads.
+
+Stack traces no longer include implicit wrapper functions (previously marked <autogenerated>
),
+unless a fault or panic happens in the wrapper itself.
+As a result, skip counts passed to functions like Caller
+should now always match the structure of the code as written, rather than depending on
+optimization decisions and implementation details.
+
+The garbage collector has been modified to reduce its impact on allocation latency. +It now uses a smaller fraction of the overall CPU when running, but it may run more of the time. +The total CPU consumed by the garbage collector has not changed significantly. +
+ +
+The GOROOT
function
+now defaults (when the $GOROOT
environment variable is not set)
+to the GOROOT
or GOROOT_FINAL
in effect
+at the time the calling program was compiled.
+Previously it used the GOROOT
or GOROOT_FINAL
in effect
+at the time the toolchain that compiled the calling program was compiled.
+
+There is no longer a limit on the GOMAXPROCS
setting.
+(In Go 1.9 the limit was 1024.)
+
+As always, the changes are so general and varied that precise +statements about performance are difficult to make. Most programs +should run a bit faster, due to speedups in the garbage collector, +better generated code, and optimizations in the core library. +
+ ++Many applications should experience significantly lower allocation latency and overall performance overhead when the garbage collector is active. +
+ ++All of the changes to the standard library are minor. +The changes in bytes +and net/url are the most likely to require updating of existing programs. +
+ ++As always, there are various minor changes and updates to the library, +made with the Go 1 promise of compatibility +in mind. +
+ ++In general, the handling of special header formats is significantly improved and expanded. +
+
+FileInfoHeader
has always
+recorded the Unix UID and GID numbers from its os.FileInfo
argument
+(specifically, from the system-dependent information returned by the FileInfo
's Sys
method)
+in the returned Header
.
+Now it also records the user and group names corresponding to those IDs,
+as well as the major and minor device numbers for device files.
+
+The new Header.Format
field
+of type Format
+controls which tar header format the Writer
uses.
+The default, as before, is to select the most widely-supported header type
+that can encode the fields needed by the header (USTAR if possible, or else PAX if possible, or else GNU).
+The Reader
sets Header.Format
for each header it reads.
+
+Reader
and the Writer
now support arbitrary PAX records,
+using the new Header.PAXRecords
field,
+a generalization of the existing Xattrs
field.
+
+The Reader
no longer insists that the file name or link name in GNU headers
+be valid UTF-8.
+
+When writing PAX- or GNU-format headers, the Writer
now includes
+the Header.AccessTime
and Header.ChangeTime
fields (if set).
+When writing PAX-format headers, the times include sub-second precision.
+
+Go 1.10 adds more complete support for times and character set encodings in ZIP archives. +
+
+The original ZIP format used the standard MS-DOS encoding of year, month, day, hour, minute, and second into fields in two 16-bit values.
+That encoding cannot represent time zones or odd seconds, so multiple extensions have been
+introduced to allow richer encodings.
+In Go 1.10, the Reader
and Writer
+now support the widely-understood Info-Zip extension that encodes the time separately in the 32-bit Unix “seconds since epoch” form.
+The FileHeader
's new Modified
field of type time.Time
+obsoletes the ModifiedTime
and ModifiedDate
fields, which continue to hold the MS-DOS encoding.
+The Reader
and Writer
now adopt the common
+convention that a ZIP archive storing a time zone-independent Unix time
+also stores the local time in the MS-DOS field,
+so that the time zone offset can be inferred.
+For compatibility, the ModTime
and
+SetModTime
methods
+behave the same as in earlier releases; new code should use Modified
directly.
+
+The header for each file in a ZIP archive has a flag bit indicating whether
+the name and comment fields are encoded as UTF-8, as opposed to a system-specific default encoding.
+In Go 1.8 and earlier, the Writer
never set the UTF-8 bit.
+In Go 1.9, the Writer
changed to set the UTF-8 bit almost always.
+This broke the creation of ZIP archives containing Shift-JIS file names.
+In Go 1.10, the Writer
now sets the UTF-8 bit only when
+both the name and the comment field are valid UTF-8 and at least one is non-ASCII.
+Because non-ASCII encodings very rarely look like valid UTF-8, the new
+heuristic should be correct nearly all the time.
+Setting a FileHeader
's new NonUTF8
field to true
+disables the heuristic entirely for that file.
+
+The Writer
also now supports setting the end-of-central-directory record's comment field,
+by calling the Writer
's new SetComment
method.
+
+The new Reader.Size
+and Writer.Size
+methods report the Reader
or Writer
's underlying buffer size.
+
+The
+Fields
,
+FieldsFunc
,
+Split
,
+and
+SplitAfter
+functions have always returned subslices of their inputs.
+Go 1.10 changes each returned subslice to have capacity equal to its length,
+so that appending to one cannot overwrite adjacent data in the original input.
+
+NewOFB
now panics if given
+an initialization vector of incorrect length, like the other constructors in the
+package always have.
+(Previously it returned a nil Stream
implementation.)
+
+The TLS server now advertises support for SHA-512 signatures when using TLS 1.2. +The server already supported the signatures, but some clients would not select +them unless explicitly advertised. +
+
+Certificate.Verify
+now enforces the name constraints for all
+names contained in the certificate, not just the one name that a client has asked about.
+Extended key usage restrictions are similarly now checked all at once.
+As a result, after a certificate has been validated, now it can be trusted in its entirety.
+It is no longer necessary to revalidate the certificate for each additional name
+or key usage.
+
+Parsed certificates also now report URI names and IP, email, and URI constraints, using the new
+Certificate
fields
+URIs
, PermittedIPRanges
, ExcludedIPRanges
,
+PermittedEmailAddresses
, ExcludedEmailAddresses
,
+PermittedURIDomains
, and ExcludedURIDomains
. Certificates with
+invalid values for those fields are now rejected.
+
+The new MarshalPKCS1PublicKey
+and ParsePKCS1PublicKey
+functions convert an RSA public key to and from PKCS#1-encoded form.
+
+The new MarshalPKCS8PrivateKey
+function converts a private key to PKCS#8-encoded form.
+(ParsePKCS8PrivateKey
+has existed since Go 1.)
+
+Name
now implements a
+String
method that
+formats the X.509 distinguished name in the standard RFC 2253 format.
+
+Drivers that currently hold on to the destination buffer provided by
+driver.Rows.Next
should ensure they no longer
+write to a buffer assigned to the destination array outside of that call.
+Drivers must be careful that underlying buffers are not modified when closing
+driver.Rows
.
+
+Drivers that want to construct a sql.DB
for
+their clients can now implement the Connector
interface
+and call the new sql.OpenDB
function,
+instead of needing to encode all configuration into a string
+passed to sql.Open
.
+
+Drivers that want to parse the configuration string only once per sql.DB
+instead of once per sql.Conn
,
+or that want access to each sql.Conn
's underlying context,
+can make their Driver
+implementations also implement DriverContext
's
+new OpenConnector
method.
+
+Drivers that implement ExecerContext
+no longer need to implement Execer
;
+similarly, drivers that implement QueryerContext
+no longer need to implement Queryer
.
+Previously, even if the context-based interfaces were implemented they were ignored
+unless the non-context-based interfaces were also implemented.
+
+To allow drivers to better isolate different clients using a cached driver connection in succession,
+if a Conn
implements the new
+SessionResetter
interface,
+database/sql
will now call ResetSession
before
+reusing the Conn
for a new client.
+
+This release adds 348 new relocation constants divided between the relocation types
+R_386
,
+R_AARCH64
,
+R_ARM
,
+R_PPC64
,
+and
+R_X86_64
.
+
+Go 1.10 adds support for reading relocations from Mach-O sections,
+using the Section
struct's new Relocs
field
+and the new Reloc
,
+RelocTypeARM
,
+RelocTypeARM64
,
+RelocTypeGeneric
,
+and
+RelocTypeX86_64
+types and associated constants.
+
+Go 1.10 also adds support for the LC_RPATH
load command,
+represented by the types
+RpathCmd
and
+Rpath
,
+and new named constants
+for the various flag bits found in headers.
+
+Marshal
now correctly encodes
+strings containing asterisks as type UTF8String instead of PrintableString,
+unless the string is in a struct field with a tag forcing the use of PrintableString.
+Marshal
also now respects struct tags containing application
directives.
+
+The new MarshalWithParams
+function marshals its argument as if the additional params were its associated
+struct field tag.
+
+Unmarshal
now respects
+struct field tags using the explicit
and tag
+directives.
+
+Both Marshal
and Unmarshal
now support a new struct field tag
+numeric
, indicating an ASN.1 NumericString.
+
+Reader
now disallows the use of
+nonsensical Comma
and Comment
settings,
+such as NUL, carriage return, newline, invalid runes, and the Unicode replacement character,
+or setting Comma
and Comment
equal to each other.
+
+In the case of a syntax error in a CSV record that spans multiple input lines, Reader
+now reports the line on which the record started in the ParseError
's new StartLine
field.
+
+The new functions
+NewEncoder
+and
+NewDecoder
+provide streaming conversions to and from hexadecimal,
+analogous to equivalent functions already in
+encoding/base32
+and
+encoding/base64.
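+
+A minimal sketch of the streaming API (error handling omitted for brevity):
+package main
+
+import (
+	"encoding/hex"
+	"os"
+	"strings"
+)
+
+func main() {
+	// Encode a stream directly to stdout: prints "68656c6c6f".
+	enc := hex.NewEncoder(os.Stdout)
+	enc.Write([]byte("hello"))
+
+	// Decode a stream of hex digits back into bytes.
+	dec := hex.NewDecoder(strings.NewReader("776f726c64"))
+	buf := make([]byte, 5)
+	dec.Read(buf) // buf now holds "world"
+}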
+
+When the functions
+Decode
+and
+DecodeString
+encounter malformed input,
+they now return the number of bytes already converted
+along with the error.
+Previously they always returned a count of 0 with any error.
+
+The Decoder
+adds a new method
+DisallowUnknownFields
+that causes it to report inputs with unknown JSON fields as a decoding error.
+(The default behavior has always been to discard unknown fields.)
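+
+For example, a small sketch of the stricter decoding mode:
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"strings"
+)
+
+func main() {
+	type Config struct{ Name string }
+	dec := json.NewDecoder(strings.NewReader(`{"Name": "a", "Port": 8080}`))
+	dec.DisallowUnknownFields()
+	var c Config
+	// Fails because "Port" is not a field of Config.
+	fmt.Println(dec.Decode(&c))
+}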
+
+As a result of fixing a reflect bug,
+Unmarshal
+can no longer decode into fields inside
+embedded pointers to unexported struct types,
+because it cannot initialize the unexported embedded pointer
+to point at fresh storage.
+Unmarshal
now returns an error in this case.
+
+Encode
+and
+EncodeToMemory
+no longer generate partial output when presented with a
+block that is impossible to encode as PEM data.
+
+The new function
+NewTokenDecoder
+is like
+NewDecoder
+but creates a decoder reading from a TokenReader
+instead of an XML-formatted byte stream.
+This is meant to enable the construction of XML stream transformers in client libraries.
+
+The default
+Usage
function now prints
+its first line of output to
+CommandLine.Output()
+instead of assuming os.Stderr
,
+so that the usage message is properly redirected for
+clients using CommandLine.SetOutput
.
+
+PrintDefaults
now
+adds appropriate indentation after newlines in flag usage strings,
+so that multi-line usage strings display nicely.
+
+FlagSet
adds new methods
+ErrorHandling
,
+Name
,
+and
+Output
,
+to retrieve the settings passed to
+NewFlagSet
+and
+FlagSet.SetOutput
.
+
+To support the doc change described above,
+functions returning slices of T
, *T
, **T
, and so on
+are now reported in T
's Type
's Funcs
list,
+instead of in the Package
's Funcs
list.
+
+The For
function now accepts a non-nil lookup argument.
+
+The changes to the default formatting of Go source code +discussed in the gofmt section above +are implemented in the go/printer package +and also affect the output of the higher-level go/format package. +
+
+Implementations of the Hash
interface are now
+encouraged to implement encoding.BinaryMarshaler
+and encoding.BinaryUnmarshaler
+to allow saving and recreating their internal state,
+and all implementations in the standard library
+(hash/crc32, crypto/sha256, and so on)
+now implement those interfaces.
+
+The new Srcset
content
+type allows for proper handling of values within the
+srcset
+attribute of img
tags.
+
+Int
now supports conversions to and from bases 2 through 62
+in its SetString
and Text
methods.
+(Previously it only allowed bases 2 through 36.)
+The value of the constant MaxBase
has been updated.
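+
+For example, a small sketch of round-tripping a value through base 62 (digits 0-9, then a-z, then A-Z):
+package main
+
+import (
+	"fmt"
+	"math/big"
+)
+
+func main() {
+	n := new(big.Int)
+	if _, ok := n.SetString("Zz", 62); ok {
+		fmt.Println(n)          // 3817
+		fmt.Println(n.Text(62)) // Zz
+	}
+}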
+
+Int
adds a new
+CmpAbs
method
+that is like Cmp
but
+compares only the absolute values (not the signs) of its arguments.
+
+Branch cuts and other boundary cases in
+Asin
,
+Asinh
,
+Atan
,
+and
+Sqrt
+have been corrected to match the definitions used in the C99 standard.
+
+The new Shuffle
function and corresponding
+Rand.Shuffle
method
+shuffle an input sequence.
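+
+A minimal sketch of the new helper:
+package main
+
+import (
+	"fmt"
+	"math/rand"
+)
+
+func main() {
+	words := []string{"ant", "bee", "cat", "dog"}
+	// Shuffle calls swap for each pseudo-random pair of indices.
+	rand.Shuffle(len(words), func(i, j int) {
+		words[i], words[j] = words[j], words[i]
+	})
+	fmt.Println(words)
+}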
+
+The new functions
+Round
+and
+RoundToEven
+round their arguments to the nearest floating-point integer;
+Round
rounds a half-integer to its larger integer neighbor (away from zero)
+while RoundToEven
rounds a half-integer to its even integer neighbor.
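+
+For example, the two functions differ only on half-integers:
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+func main() {
+	fmt.Println(math.Round(2.5))       // 3 (half-integers round away from zero)
+	fmt.Println(math.Round(-2.5))      // -3
+	fmt.Println(math.RoundToEven(2.5)) // 2 (half-integers round to the even neighbor)
+	fmt.Println(math.RoundToEven(3.5)) // 4
+}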
+
+The new functions
+Erfinv
+and
+Erfcinv
+compute the inverse error function and the
+inverse complementary error function.
+
+Reader
+now accepts parts with empty filename attributes.
+
+ParseMediaType
now discards
+invalid attribute values; previously it returned those values as empty strings.
+
+The Conn
and
+Listener
implementations
+in this package now guarantee that when Close
returns,
+the underlying file descriptor has been closed.
+(In earlier releases, if the Close
stopped pending I/O
+in other goroutines, the closing of the file descriptor could happen in one of those
+goroutines shortly after Close
returned.)
+
+TCPListener
and
+UnixListener
+now implement
+syscall.Conn
,
+to allow setting options on the underlying file descriptor
+using syscall.RawConn.Control
.
+
+The Conn
implementations returned by Pipe
+now support setting read and write deadlines.
+
+The IPConn.ReadMsgIP
,
+IPConn.WriteMsgIP
,
+UDPConn.ReadMsgUDP
,
+and
+UDPConn.WriteMsgUDP
,
+methods are now implemented on Windows.
+
+On the client side, an HTTP proxy (most commonly configured by
+ProxyFromEnvironment
)
+can now be specified as an https://
URL,
+meaning that the client connects to the proxy over HTTPS before issuing a standard, proxied HTTP request.
+(Previously, HTTP proxy URLs were required to begin with http://
or socks5://
.)
+
+On the server side, FileServer
and its single-file equivalent ServeFile
+now apply If-Range
checks to HEAD
requests.
+FileServer
also now reports directory read failures to the Server
's ErrorLog
.
+The content-serving handlers also now omit the Content-Type
header when serving zero-length content.
+
+ResponseWriter
's WriteHeader
method now panics
+if passed an invalid (non-3-digit) status code.
+
+
+The Server
will no longer add an implicit Content-Type when a Handler
does not write any output.
+
+Redirect
now sets the Content-Type
header before writing its HTTP response.
+
+ParseAddress
and
+ParseAddressList
+now support a variety of obsolete address formats.
+
+The Client
adds a new
+Noop
method,
+to test whether the server is still responding.
+It also now defends against possible SMTP injection in the inputs
+to the Hello
+and Verify
methods.
+
+ReadMIMEHeader
+now rejects any header that begins with a continuation (indented) header line.
+Previously a header with an indented first line was treated as if the first line
+were not indented.
+
+ResolveReference
+now preserves multiple leading slashes in the target URL.
+Previously it rewrote multiple leading slashes to a single slash,
+which resulted in the http.Client
+following certain redirects incorrectly.
+
+For example, this code's output has changed: +
++base, _ := url.Parse("http://host//path//to/page1") +target, _ := url.Parse("page2") +fmt.Println(base.ResolveReference(target)) ++
+Note the doubled slashes around path
.
+In Go 1.9 and earlier, the resolved URL was http://host/path//to/page2
:
+the doubled slash before path
was incorrectly rewritten
+to a single slash, while the doubled slash after path
was
+correctly preserved.
+Go 1.10 preserves both doubled slashes, resolving to http://host//path//to/page2
+as required by RFC 3986.
+
This change may break existing buggy programs that unintentionally
+construct a base URL with a leading doubled slash in the path and inadvertently
+depend on ResolveReference
to correct that mistake.
+For example, this can happen if code adds a host prefix
+like http://host/
to a path like /my/api
,
+resulting in a URL with a doubled slash: http://host//my/api
.
+
+UserInfo
's methods
+now treat a nil receiver as equivalent to a pointer to a zero UserInfo
.
+Previously, they panicked.
+
+File
adds new methods
+SetDeadline
,
+SetReadDeadline
,
+and
+SetWriteDeadline
+that allow setting I/O deadlines when the
+underlying file descriptor supports non-blocking I/O operations.
+The definition of these methods matches those in net.Conn
.
+If an I/O method fails due to missing a deadline, it will return a
+timeout error; the
+new IsTimeout
function
+reports whether an error represents a timeout.
+
+Also matching net.Conn
,
+File
's
+Close
method
+now guarantees that when Close
returns,
+the underlying file descriptor has been closed.
+(In earlier releases,
+if the Close
stopped pending I/O
+in other goroutines, the closing of the file descriptor could happen in one of those
+goroutines shortly after Close
returned.)
+
+On BSD, macOS, and Solaris systems,
+Chtimes
+now supports setting file times with nanosecond precision
+(assuming the underlying file system can represent them).
+
+The Copy
function now allows copying
+from a string into a byte array or byte slice, to match the
+built-in copy function.
+
+In structs, embedded pointers to unexported struct types were
+previously incorrectly reported with an empty PkgPath
+in the corresponding StructField,
+with the result that for those fields,
+Value.CanSet
+incorrectly returned true and
+Value.Set
+incorrectly succeeded.
+The underlying metadata has been corrected;
+for those fields,
+CanSet
now correctly returns false
+and Set
now correctly panics.
+This may affect reflection-based unmarshalers
+that could previously unmarshal into such fields
+but no longer can.
+For example, see the encoding/json
notes.
+
+As noted above, the blocking and mutex profiles +now include symbol information so that they can be viewed without needing +the binary that generated them. +
+
+ParseUint
now returns
+the maximum magnitude integer of the appropriate size
+with any ErrRange
error, as it was already documented to do.
+Previously it returned 0 with ErrRange
errors.
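+
+For example, parsing an out-of-range value into 8 bits:
+package main
+
+import (
+	"fmt"
+	"strconv"
+)
+
+func main() {
+	// 300 does not fit in 8 bits: the returned value is now 255 (the maximum
+	// for the requested size) rather than 0, and the error wraps ErrRange.
+	v, err := strconv.ParseUint("300", 10, 8)
+	fmt.Println(v, err)
+}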
+
+A new type
+Builder
is a replacement for
+bytes.Buffer
for the use case of
+accumulating text into a string
result.
+The Builder
's API is a restricted subset of bytes.Buffer
's
+that allows it to safely avoid making a duplicate copy of the data
+during the String
method.
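+
+A minimal sketch of accumulating text with the new type:
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+func main() {
+	var b strings.Builder // the zero value is ready to use
+	for i := 0; i < 3; i++ {
+		fmt.Fprintf(&b, "line %d\n", i)
+	}
+	fmt.Print(b.String()) // no extra copy of the accumulated bytes
+}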
+
+On Windows,
+the new SysProcAttr
field Token
,
+of type Token
allows the creation of a process that
+runs as another user during StartProcess
+(and therefore also during os.StartProcess
and
+exec.Cmd.Start
).
+The new function CreateProcessAsUser
+gives access to the underlying system call.
+
+On BSD, macOS, and Solaris systems, UtimesNano
+is now implemented.
+
+LoadLocation
now uses the directory
+or uncompressed zip file named by the $ZONEINFO
+environment variable before looking in the default system-specific list of
+known installation locations or in $GOROOT/lib/time/zoneinfo.zip
.
+
+The new function LoadLocationFromTZData
+allows conversion of IANA time zone file data to a Location
.
+
+The unicode
package and associated
+support throughout the system has been upgraded from Unicode 9.0 to
+Unicode 10.0,
+which adds 8,518 new characters, including four new scripts, one new property,
+a Bitcoin currency symbol, and 56 new emoji.
+
+ The latest Go release, version 1.11, arrives six months after Go 1.10. + Most of its changes are in the implementation of the toolchain, runtime, and libraries. + As always, the release maintains the Go 1 promise of compatibility. + We expect almost all Go programs to continue to compile and run as before. +
+ ++ There are no changes to the language specification. +
+ ++ As announced in the Go 1.10 release notes, Go 1.11 now requires + OpenBSD 6.2 or later, macOS 10.10 Yosemite or later, or Windows 7 or later; + support for previous versions of these operating systems has been removed. +
+ ++ Go 1.11 supports the upcoming OpenBSD 6.4 release. Due to changes in + the OpenBSD kernel, older versions of Go will not work on OpenBSD 6.4. +
+ ++ There are known issues with NetBSD on i386 hardware. +
+ +
+ The race detector is now supported on linux/ppc64le
+ and, to a lesser extent, on netbsd/amd64
. The NetBSD race detector support
+ has known issues.
+
+ The memory sanitizer (-msan
) is now supported on linux/arm64
.
+
+ The build modes c-shared
and c-archive
are now supported on
+ freebsd/amd64
.
+
+ On 64-bit MIPS systems, the new environment variable settings
+ GOMIPS64=hardfloat
(the default) and
+ GOMIPS64=softfloat
select whether to use
+ hardware instructions or software emulation for floating-point computations.
+ For 32-bit systems, the environment variable is still GOMIPS
,
+ as added in Go 1.10.
+
+ On soft-float ARM systems (GOARM=5
), Go now uses a more
+ efficient software floating point interface. This is transparent to
+ Go code, but ARM assembly that uses floating-point instructions not
+ guarded on GOARM will break and must be ported to
+ the new interface.
+
+ Go 1.11 on ARMv7 no longer requires a Linux kernel configured
+ with KUSER_HELPERS
. This setting is enabled in default
+ kernel configurations, but is sometimes disabled in stripped-down
+ configurations.
+
+ Go 1.11 adds an experimental port to WebAssembly
+ (js/wasm
).
+
+ Go programs currently compile to one WebAssembly module that
+ includes the Go runtime for goroutine scheduling, garbage
+ collection, maps, etc.
+ As a result, the minimum binary size is currently around
+ 2 MB, or 500 KB compressed. Go programs can call into JavaScript
+ using the new experimental
+ syscall/js
package.
+ Binary size and interop with other languages have not yet been a
+ priority but may be addressed in future releases.
+
+ As a result of the addition of the new GOOS
value
+ "js
" and GOARCH
value "wasm
",
+ Go files named *_js.go
or *_wasm.go
will
+ now be ignored by Go
+ tools except when those GOOS/GOARCH values are being used.
+ If you have existing filenames matching those patterns, you will need to rename them.
+
+ More information can be found on the + WebAssembly wiki page. +
+ +
+ The main Go compiler does not yet support the RISC-V architecture
+ but we've reserved the GOARCH
values
+ "riscv
" and "riscv64
", as used by Gccgo,
+ which does support RISC-V. This means that Go files
+ named *_riscv.go
will now also
+ be ignored by Go
+ tools except when those GOOS/GOARCH values are being used.
+
+ Go 1.11 adds preliminary support for a new concept called “modules,” + an alternative to GOPATH with integrated support for versioning and + package distribution. + Using modules, developers are no longer confined to working inside GOPATH, + version dependency information is explicit yet lightweight, + and builds are more reliable and reproducible. +
+ +
+ Module support is considered experimental.
+ Details are likely to change in response to feedback from Go 1.11 users,
+ and we have more tools planned.
+ Although the details of module support may change, projects that convert
+ to modules using Go 1.11 will continue to work with Go 1.12 and later.
+ If you encounter bugs using modules,
+ please file issues
+ so we can fix them. For more information, see the
+ go
command documentation.
+
+ Because Go module support assigns special meaning to the
+ @
symbol in command line operations,
+ the go
command now disallows the use of
+ import paths containing @
symbols.
+ Such import paths were never allowed by go
get
,
+ so this restriction can only affect users building
+ custom GOPATH trees by other means.
+
+ The new package
+ golang.org/x/tools/go/packages
+ provides a simple API for locating and loading packages of Go source code.
+ Although not yet part of the standard library, for many tasks it
+ effectively replaces the go/build
+ package, whose API is unable to fully support modules.
+ Because it runs an external query command such as
+ go list
+ to obtain information about Go packages, it enables the construction of
+ analysis tools that work equally well with alternative build systems
+ such as Bazel
+ and Buck.
+
+ Go 1.11 will be the last release to support setting the environment
+ variable GOCACHE=off
to disable the
+ build cache,
+ introduced in Go 1.10.
+ Starting in Go 1.12, the build cache will be required,
+ as a step toward eliminating $GOPATH/pkg
.
+ The module and package loading support described above
+ already require that the build cache be enabled.
+ If you have disabled the build cache to avoid problems you encountered,
+ please file an issue to let us know about them.
+
+ More functions are now eligible for inlining by default, including
+ functions that call panic
.
+
+ The compiler toolchain now supports column information + in line + directives. +
+ +
+ A new package export data format has been introduced.
+ This should be transparent to end users, except for speeding up
+ build times for large Go projects.
+ If it does cause problems, it can be turned off again by
+ passing -gcflags=all=-iexport=false
to
+ the go
tool when building a binary.
+
+ The compiler now rejects unused variables declared in a type switch
+ guard, such as x
in the following example:
+
+func f(v interface{}) { + switch x := v.(type) { + } +} ++
+ This was already rejected by both gccgo
+ and go/types.
+
+ The assembler for amd64
now accepts AVX512 instructions.
+
+ The compiler now produces significantly more accurate debug
+ information for optimized binaries, including variable location
+ information, line numbers, and breakpoint locations.
+
+ This should make it possible to debug binaries
+ compiled without -N
-l
.
+
+ There are still limitations to the quality of the debug information,
+ some of which are fundamental, and some of which will continue to
+ improve with future releases.
+
+ DWARF sections are now compressed by default because of the expanded
+ and more accurate debug information produced by the compiler.
+
+ This is transparent to most ELF tools (such as debuggers on Linux
+ and *BSD) and is supported by the Delve debugger on all platforms,
+ but has limited support in the native tools on macOS and Windows.
+
+ To disable DWARF compression,
+ pass -ldflags=-compressdwarf=false
to
+ the go
tool when building a binary.
+
+ Go 1.11 adds experimental support for calling Go functions from
+ within a debugger.
+
+ This is useful, for example, to call String
methods
+ when paused at a breakpoint.
+
+ This is currently only supported by Delve (version 1.1.0 and up).
+
+ Since Go 1.10, the go
test
command runs
+ go
vet
on the package being tested,
+ to identify problems before running the test. Since vet
+ typechecks the code with go/types
+ before running, tests that do not typecheck will now fail.
+
+ In particular, tests that contain an unused variable inside a
+ closure compiled under Go 1.10 (because the Go compiler incorrectly
+ accepted them, Issue #3059)
+ will now fail, since go/types
correctly reports an
+ "unused variable" error in this case.
+
+ The -memprofile
flag
+ to go
test
now defaults to the
+ "allocs" profile, which records the total bytes allocated since the
+ test began (including garbage-collected bytes).
+
+ The go
vet
+ command now reports a fatal error when the package under analysis
+ does not typecheck. Previously, a type checking error simply caused
+ a warning to be printed, and vet
to exit with status 1.
+
+ Additionally, go
vet
+ has become more robust when format-checking printf
wrappers.
+ Vet now detects the mistake in this example:
+
+func wrapper(s string, args ...interface{}) { + fmt.Printf(s, args...) +} + +func main() { + wrapper("%s", 42) +} ++ +
+ With the new runtime/trace
+ package's user
+ annotation API, users can record application-level information
+ in execution traces and create groups of related goroutines.
+ The go
tool
trace
+ command visualizes this information in the trace view and the new
+ user task/region analysis page.
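+
+ A minimal sketch of the annotation API; the task, region, and log names here are made up for illustration:
+package main
+
+import (
+	"context"
+	"os"
+	"runtime/trace"
+)
+
+func main() {
+	if err := trace.Start(os.Stderr); err != nil {
+		panic(err)
+	}
+	defer trace.Stop()
+
+	// A task groups related goroutines; regions mark intervals within one goroutine.
+	ctx, task := trace.NewTask(context.Background(), "makeCoffee")
+	defer task.End()
+
+	trace.WithRegion(ctx, "grindBeans", func() {
+		// ... work attributed to this region ...
+	})
+	trace.Log(ctx, "status", "done")
+}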
+
+Since Go 1.10, cgo has translated some C pointer types to the Go
+type uintptr
. These types include
+the CFTypeRef
hierarchy in Darwin's CoreFoundation
+framework and the jobject
hierarchy in Java's JNI
+interface. In Go 1.11, several improvements have been made to the code
+that detects these types. Code that uses these types may need some
+updating. See the Go 1.10 release notes for
+details.
+
+ The environment variable GOFLAGS
may now be used
+ to set default flags for the go
command.
+ This is useful in certain situations.
+ Linking can be noticeably slower on underpowered systems due to DWARF,
+ and users may want to set -ldflags=-w
by default.
+ For modules, some users and CI systems will want vendoring always,
+ so they should set -mod=vendor
by default.
+ For more information, see the go
+ command documentation.
+
+ Go 1.11 will be the last release to support godoc
's command-line interface.
+ In future releases, godoc
will only be a web server. Users should use
+ go
doc
for command-line help output instead.
+
+ The godoc
web server now shows which version of Go introduced
+ new API features. The initial Go version of types, funcs, and methods are shown
+ right-aligned. For example, see UserCacheDir
, with "1.11"
+ on the right side. For struct fields, inline comments are added when the struct field was
+ added in a Go version other than when the type itself was introduced.
+ For a struct field example, see
+ ClientTrace.Got1xxResponse
.
+
+ One minor detail of the default formatting of Go source code has changed. + When formatting expression lists with inline comments, the comments were + aligned according to a heuristic. + However, in some cases the alignment would be split up too easily, or + introduce too much whitespace. + The heuristic has been changed to behave better for human-written code. +
+ +
+ Note that these kinds of minor updates to gofmt are expected from time to
+ time.
+ In general, systems that need consistent formatting of Go source code should
+ use a specific version of the gofmt
binary.
+ See the go/format package documentation for more
+ information.
+
+
+ The go
run
+ command now allows a single import path, a directory name or a
+ pattern matching a single package.
+ This allows go
run
pkg
or go
run
dir
, most importantly go
run
.
+
+ The runtime now uses a sparse heap layout so there is no longer a
+ limit to the size of the Go heap (previously, the limit was 512GiB).
+ This also fixes rare "address space conflict" failures in mixed Go/C
+ binaries or binaries compiled with -race
.
+
+ On macOS and iOS, the runtime now uses libSystem.dylib
instead of
+ calling the kernel directly. This should make Go binaries more
+ compatible with future versions of macOS and iOS.
+ The syscall package still makes direct
+ system calls; fixing this is planned for a future release.
+
+As always, the changes are so general and varied that precise +statements about performance are difficult to make. Most programs +should run a bit faster, due to better generated code and +optimizations in the core library. +
+ +
+There were multiple performance changes to the math/big
+package as well as many changes across the tree specific to GOARCH=arm64
.
+
+ The compiler now optimizes map clearing operations of the form: +
++for k := range m { + delete(m, k) +} ++ +
+ The compiler now optimizes slice extension of the form
+ append(s,
make([]T,
n)...)
.
+
+ The compiler now performs significantly more aggressive bounds-check
+ and branch elimination. Notably, it now recognizes transitive
+ relations, so if i<j
and j<len(s)
,
+ it can use these facts to eliminate the bounds check
+ for s[i]
. It also understands simple arithmetic such
+ as s[i-10]
and can recognize more inductive cases in
+ loops. Furthermore, the compiler now uses bounds information to more
+ aggressively optimize shift operations.
+
+ All of the changes to the standard library are minor. +
+ ++ As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +
+ + + + + + +
+ Certain crypto operations, including
+ ecdsa.Sign
,
+ rsa.EncryptPKCS1v15
and
+ rsa.GenerateKey
,
+ now randomly read an extra byte of randomness to ensure tests don't rely on internal behavior.
+
+ The new function NewGCMWithTagSize
+ implements Galois Counter Mode with non-standard tag lengths for compatibility with existing cryptosystems.
+
+ PublicKey
now implements a
+ Size
method that
+ returns the modulus size in bytes.
+
+ ConnectionState
's new
+ ExportKeyingMaterial
+ method allows exporting keying material bound to the
+ connection according to RFC 5705.
+
+ The deprecated, legacy behavior of treating the CommonName
field as
+ a hostname when no Subject Alternative Names are present is now disabled when the CN is not a
+ valid hostname.
+ The CommonName
can be completely ignored by adding the experimental value
+ x509ignoreCN=1
to the GODEBUG
environment variable.
+ When the CN is ignored, certificates without SANs validate under chains with name constraints
+ instead of returning NameConstraintsWithoutSANs
.
+
+ Extended key usage restrictions are again checked only if they appear in the KeyUsages
+ field of VerifyOptions
, instead of always being checked.
+ This matches the behavior of Go 1.9 and earlier.
+
+ The value returned by SystemCertPool
+ is now cached and might not reflect system changes between invocations.
+
+ Marshal
and Unmarshal
+ now support "private" class annotations for fields.
+
+ The decoder now consistently
+ returns io.ErrUnexpectedEOF
for an incomplete
+ chunk. Previously it would return io.EOF
in some
+ cases.
+
+ The Reader
now rejects attempts to set
+ the Comma
+ field to a double-quote character, as double-quote characters
+ already have a special meaning in CSV.
+
+ The package has changed its behavior when a typed interface
+ value is passed to an implicit escaper function. Previously such
+ a value was written out as (an escaped form)
+ of <nil>
. Now such values are ignored, just
+ as an untyped nil
value is (and always has been)
+ ignored.
+
+ Non-looping animated GIFs are now supported. They are denoted by having a
+ LoopCount
of -1.
+
+ The TempFile
+ function now supports specifying where the random characters in
+ the filename are placed. If the prefix
argument
+ includes a "*
", the random string replaces the
+ "*
". For example, a prefix
argument of "myname.*.bat
" will
+ result in a random filename such as
+ "myname.123456.bat
". If no "*
" is
+ included the old behavior is retained, and the random digits are
+ appended to the end.
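+
+ For example:
+package main
+
+import (
+	"fmt"
+	"io/ioutil"
+	"os"
+)
+
+func main() {
+	// The random string replaces the "*", yielding e.g. /tmp/myname.346011225.bat.
+	f, err := ioutil.TempFile("", "myname.*.bat")
+	if err != nil {
+		panic(err)
+	}
+	defer os.Remove(f.Name())
+	fmt.Println(f.Name())
+}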
+
+ ModInverse
now returns nil when g and n are not relatively prime. The result was previously undefined.
+
+ The handling of form-data with missing/empty file names has been
+ restored to the behavior in Go 1.9: in the
+ Form
for
+ the form-data part the value is available in
+ the Value
field rather than the File
+ field. In Go releases 1.10 through 1.10.3 a form-data part with
+ a missing/empty file name and a non-empty "Content-Type" field
+ was stored in the File
field. This change was a
+ mistake in 1.10 and has been reverted to the 1.9 behavior.
+
+ To support invalid input found in the wild, the package now + permits non-ASCII bytes but does not validate their encoding. +
+ +
+ The new ListenConfig
type and the new
+ Dialer.Control
field permit
+ setting socket options before accepting and creating connections, respectively.
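+
+ A minimal sketch of hooking into socket setup with the new Control callback; it only inspects the descriptor here, since setting real options would use platform-specific setsockopt calls:
+package main
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"syscall"
+)
+
+func main() {
+	lc := net.ListenConfig{
+		Control: func(network, address string, c syscall.RawConn) error {
+			// Runs after socket creation but before bind/listen.
+			return c.Control(func(fd uintptr) {
+				fmt.Println("configuring fd", fd, "for", network, address)
+			})
+		},
+	}
+	ln, err := lc.Listen(context.Background(), "tcp", "127.0.0.1:0")
+	if err != nil {
+		panic(err)
+	}
+	defer ln.Close()
+	fmt.Println("listening on", ln.Addr())
+}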
+
+ The syscall.RawConn
Read
+ and Write
methods now work correctly on Windows.
+
+ The net
package now automatically uses the
+ splice
system call
+ on Linux when copying data between TCP connections in
+ TCPConn.ReadFrom
, as called by
+ io.Copy
. The result is faster, more efficient TCP proxying.
+
+ The TCPConn.File
,
+ UDPConn.File
,
+ UnixConn.File
,
+ and IPConn.File
+ methods no longer put the returned *os.File
into
+ blocking mode.
+
+ The Transport
type has a
+ new MaxConnsPerHost
+ option that permits limiting the maximum number of connections
+ per host.
+
+ The Cookie
type has a new
+ SameSite
field
+ (of new type also named
+ SameSite
) to represent the new cookie attribute recently supported by most browsers.
+ The net/http
's Transport
does not use the SameSite
+ attribute itself, but the package supports parsing and serializing the
+ attribute for browsers to use.
+
+ It is no longer allowed to reuse a Server
+ after a call to
+ Shutdown
or
+ Close
. It was never officially supported
+ in the past and had often surprising behavior. Now, all future calls to the server's Serve
+ methods will return errors after a shutdown or close.
+
+ The constant StatusMisdirectedRequest
is now defined for HTTP status code 421.
+
+ The HTTP server will no longer cancel contexts or send on
+ CloseNotifier
+ channels upon receiving pipelined HTTP/1.1 requests. Browsers do
+ not use HTTP pipelining, but some clients (such as
+ Debian's apt
) may be configured to do so.
+
+ ProxyFromEnvironment
, which is used by the
+ DefaultTransport
, now
+ supports CIDR notation and ports in the NO_PROXY
environment variable.
+
+ The
+ ReverseProxy
+ has a new
+ ErrorHandler
+ option to permit changing how errors are handled.
+
+ The ReverseProxy
now also passes
+ "TE:
trailers
" request headers
+ through to the backend, as required by the gRPC protocol.
+
+ The new UserCacheDir
function
+ returns the default root directory to use for user-specific cached data.
+
+ The new ModeIrregular
+ is a FileMode
bit to represent
+ that a file is not a regular file, but nothing else is known about it, or that
+ it's not a socket, device, named pipe, symlink, or other file type for which
+ Go has a defined mode bit.
+
+ Symlink
now works
+ for unprivileged users on Windows 10 on machines with Developer
+ Mode enabled.
+
+ When a non-blocking descriptor is passed
+ to NewFile
, the
+ resulting *File
will be kept in non-blocking
+ mode. This means that I/O for that *File
will use
+ the runtime poller rather than a separate thread, and that
+ the SetDeadline
+ methods will work.
+
+ The new Ignored
function reports
+ whether a signal is currently ignored.
+
+ The os/user
package can now be built in pure Go
+ mode using the build tag "osusergo
",
+ independent of the use of the environment
+ variable CGO_ENABLED=0
. Previously the only way to use
+ the package's pure Go implementation was to disable cgo
+ support across the entire program.
+
+ Setting the GODEBUG=tracebackancestors=N
+ environment variable now extends tracebacks with the stacks at
+ which goroutines were created, where N limits the
+ number of ancestor goroutines to report.
+
+ This release adds a new "allocs" profile type that profiles
+ total number of bytes allocated since the program began
+ (including garbage-collected bytes). This is identical to the
+ existing "heap" profile viewed in -alloc_space
mode.
+ Now go test -memprofile=...
reports an "allocs" profile
+ instead of "heap" profile.
+
+ The mutex profile now includes reader/writer contention
+ for RWMutex
.
+ Writer/writer contention was already included in the mutex
+ profile.
+
+ On Windows, several fields were changed from uintptr
to a new
+ Pointer
+ type to avoid problems with Go's garbage collector. The same change was made
+ to the golang.org/x/sys/windows
+ package. For any code affected, users should first migrate away from the syscall
+ package to the golang.org/x/sys/windows
package, and then change
+ to using the Pointer
, while obeying the
+ unsafe.Pointer
conversion rules.
+
+ On Linux, the flags
parameter to
+ Faccessat
+ is now implemented just as in glibc. In earlier Go releases the
+ flags parameter was ignored.
+
+ On Linux, the flags
parameter to
+ Fchmodat
+ is now validated. Linux's fchmodat
doesn't support the flags
parameter
+ so we now mimic glibc's behavior and return an error if it's non-zero.
+
+ The Scanner.Scan
method now returns
+ the RawString
token
+ instead of String
+ for raw string literals.
+
+ Modifying template variables via assignments is now permitted via the =
token:
+
+ {{"{{"}} $v := "init" {{"}}"}} + {{"{{"}} if true {{"}}"}} + {{"{{"}} $v = "changed" {{"}}"}} + {{"{{"}} end {{"}}"}} + v: {{"{{"}} $v {{"}}"}} {{"{{"}}/* "changed" */{{"}}"}}+ +
+ In previous versions untyped nil
values passed to
+ template functions were ignored. They are now passed as normal
+ arguments.
+
+ Parsing of timezones denoted by sign and offset is now
+ supported. In previous versions, numeric timezone names
+ (such as +03
) were not considered valid, and only
+ three-letter abbreviations (such as MST
) were accepted
+ when expecting a timezone name.
+
+ The latest Go release, version 1.12, arrives six months after Go 1.11. + Most of its changes are in the implementation of the toolchain, runtime, and libraries. + As always, the release maintains the Go 1 promise of compatibility. + We expect almost all Go programs to continue to compile and run as before. +
+ ++ There are no changes to the language specification. +
+ +
+ The race detector is now supported on linux/arm64
.
+
+ Go 1.12 is the last release that is supported on FreeBSD 10.x, which has + already reached end-of-life. Go 1.13 will require FreeBSD 11.2+ or FreeBSD + 12.0+. + FreeBSD 12.0+ requires a kernel with the COMPAT_FREEBSD11 option set (this is the default). +
+ +
+ cgo is now supported on linux/ppc64
.
+
+ hurd
is now a recognized value for GOOS
, reserved
+ for the GNU/Hurd system for use with gccgo
.
+
+ Go's new windows/arm
port supports running Go on Windows 10
+ IoT Core on 32-bit ARM chips such as the Raspberry Pi 3.
+
+ Go now supports AIX 7.2 and later on POWER8 architectures (aix/ppc64
). External linking, cgo, pprof and the race detector aren't yet supported.
+
+ Go 1.12 is the last release that will run on macOS 10.10 Yosemite. + Go 1.13 will require macOS 10.11 El Capitan or later. +
+ +
+ libSystem
is now used when making syscalls on Darwin,
+ ensuring forward-compatibility with future versions of macOS and iOS.
+
+ The switch to libSystem
triggered additional App Store
+ checks for private API usage. Since it is considered private,
+ syscall.Getdirentries
now always fails with
+ ENOSYS
on iOS.
+ Additionally, syscall.Setrlimit
+ reports invalid
argument
in places where it historically
+ succeeded. These consequences are not specific to Go and users should expect
+ behavioral parity with libSystem
's implementation going forward.
+
go tool vet
no longer supported
+ The go vet
command has been rewritten to serve as the
+ base for a range of different source code analysis tools. See
+ the golang.org/x/tools/go/analysis
+ package for details. A side-effect is that go tool vet
+ is no longer supported. External tools that use go tool
+ vet
must be changed to use go
+ vet
. Using go vet
instead of go tool
+ vet
should work with all supported versions of Go.
+
+ As part of this change, the experimental -shadow
option
+ is no longer available with go vet
. Checking for
+ variable shadowing may now be done using
+
+go get -u golang.org/x/tools/go/analysis/passes/shadow/cmd/shadow +go vet -vettool=$(which shadow) ++ + +
+The Go tour is no longer included in the main binary distribution. To
+run the tour locally, instead of running go
tool
tour
,
+manually install it:
+
+go get -u golang.org/x/tour +tour ++ + +
+ The build cache is now
+ required as a step toward eliminating
+ $GOPATH/pkg
. Setting the environment variable
+ GOCACHE=off
will cause go
commands that write to the
+ cache to fail.
+
+ Go 1.12 is the last release that will support binary-only packages. +
+ +
+ Go 1.12 will translate the C type EGLDisplay
to the Go type uintptr
.
+ This change is similar to how Go 1.10 and newer treats Darwin's CoreFoundation
+ and Java's JNI types. See the
+ cgo documentation
+ for more information.
+
+ Mangled C names are no longer accepted in packages that use Cgo. Use the Cgo
+ names instead. For example, use the documented cgo name C.char
+ rather than the mangled name _Ctype_char
that cgo generates.
+
+ When GO111MODULE
is set to on
, the go
+ command now supports module-aware operations outside of a module directory,
+ provided that those operations do not need to resolve import paths relative to
+ the current directory or explicitly edit the go.mod
file.
+ Commands such as go
get
,
+ go
list
, and
+ go
mod
download
behave as if in a
+ module with initially-empty requirements.
+ In this mode, go
env
GOMOD
reports
+ the system's null device (/dev/null
or NUL
).
+
+ go
commands that download and extract modules are now safe to
+ invoke concurrently.
+ The module cache (GOPATH/pkg/mod
) must reside in a filesystem that
+ supports file locking.
+
+ The go
directive in a go.mod
file now indicates the
+ version of the language used by the files within that module.
+ It will be set to the current release
+ (go
1.12
) if no existing version is
+ present.
+ If the go
directive for a module specifies a
+ version newer than the toolchain in use, the go
command
+ will attempt to build the packages regardless, and will note the mismatch only if
+ that build fails.
+
+ This changed use of the go
directive means that if you
+ use Go 1.12 to build a module, thus recording go 1.12
+ in the go.mod
file, you will get an error when
+ attempting to build the same module with Go 1.11 through Go 1.11.3.
+ Go 1.11.4 or later will work fine, as will releases older than Go 1.11.
+ If you must use Go 1.11 through 1.11.3, you can avoid the problem by
+ setting the language version to 1.11, using the Go 1.12 go tool,
+ via go mod edit -go=1.11
.
+
+ When an import cannot be resolved using the active modules,
+ the go
command will now try to use the modules mentioned in the
+ main module's replace
directives before consulting the module
+ cache and the usual network sources.
+ If a matching replacement is found but the replace
directive does
+ not specify a version, the go
command uses a pseudo-version
+ derived from the zero time.Time
(such
+ as v0.0.0-00010101000000-000000000000
).
+
+ The compiler's live variable analysis has improved. This may mean that
+ finalizers will be executed sooner in this release than in previous
+ releases. If that is a problem, consider the appropriate addition of a
+ runtime.KeepAlive
call.
+
+ More functions are now eligible for inlining by default, including
+ functions that do nothing but call another function.
+ This extra inlining makes it additionally important to use
+ runtime.CallersFrames
+ instead of iterating over the result of
+ runtime.Callers
directly.
+
+// Old code which no longer works correctly (it will miss inlined call frames). +var pcs [10]uintptr +n := runtime.Callers(1, pcs[:]) +for _, pc := range pcs[:n] { + f := runtime.FuncForPC(pc) + if f != nil { + fmt.Println(f.Name()) + } +} ++
+// New code which will work correctly. +var pcs [10]uintptr +n := runtime.Callers(1, pcs[:]) +frames := runtime.CallersFrames(pcs[:n]) +for { + frame, more := frames.Next() + fmt.Println(frame.Function) + if !more { + break + } +} ++ + +
+ Wrappers generated by the compiler to implement method expressions
+ are no longer reported
+ by runtime.CallersFrames
+ and runtime.Stack
. They
+ are also not printed in panic stack traces.
+
+ This change aligns the gc
toolchain to match
+ the gccgo
toolchain, which already elided such wrappers
+ from stack traces.
+
+ Clients of these APIs might need to adjust for the missing
+ frames. For code that must interoperate between 1.11 and 1.12
+ releases, you can replace the method expression x.M
+ with the function literal func (...) { x.M(...) }
.
+
+ The compiler now accepts a -lang
flag to set the Go language
+ version to use. For example, -lang=go1.8
causes the compiler to
+ emit an error if the program uses type aliases, which were added in Go 1.9.
+ Language changes made before Go 1.12 are not consistently enforced.
+
+ The compiler toolchain now uses different conventions to call Go + functions and assembly functions. This should be invisible to users, + except for calls that simultaneously cross between Go and + assembly and cross a package boundary. If linking results + in an error like "relocation target not defined for ABIInternal (but + is defined for ABI0)", please refer to the + compatibility section + of the ABI design document. +
+ ++ There have been many improvements to the DWARF debug information + produced by the compiler, including improvements to argument + printing and variable location information. +
+ +
+ Go programs now also maintain stack frame pointers on linux/arm64
+ for the benefit of profiling tools like perf
. The frame pointer
+ maintenance has a small run-time overhead that varies but averages around 3%.
+ To build a toolchain that does not use frame pointers, set
+ GOEXPERIMENT=noframepointer
when running make.bash
.
+
+ The obsolete "safe" compiler mode (enabled by the -u
gcflag) has been removed.
+
godoc
and go
doc
+ In Go 1.12, godoc
no longer has a command-line interface and
+ is only a web server. Users should use go
doc
+ for command-line help output instead. Go 1.12 is the last release that will
+ include the godoc
webserver; in Go 1.13 it will be available
+ via go
get
.
+
+ go
doc
now supports the -all
flag,
+ which will cause it to print all exported APIs and their documentation,
+ as the godoc
command line used to do.
+
+ go
doc
also now includes the -src
flag,
+ which will show the target's source code.
+
+ The trace tool now supports plotting mutator utilization curves, + including cross-references to the execution trace. These are useful + for analyzing the impact of the garbage collector on application + latency and throughput. +
+ +
+ On arm64
, the platform register was renamed from
+ R18
to R18_PLATFORM
to prevent accidental
+ use, as the OS could choose to reserve this register.
+
+ Go 1.12 significantly improves the performance of sweeping when a + large fraction of the heap remains live. This reduces allocation + latency immediately following a garbage collection. +
+ ++ The Go runtime now releases memory back to the operating system more + aggressively, particularly in response to large allocations that + can't reuse existing heap space. +
+ ++ The Go runtime's timer and deadline code is faster and scales better + with higher numbers of CPUs. In particular, this improves the + performance of manipulating network connection deadlines. +
+ +
+ On Linux, the runtime now uses MADV_FREE
to release unused
+ memory. This is more efficient but may result in higher reported
+ RSS. The kernel will reclaim the unused data when it is needed.
+ To revert to the Go 1.11 behavior (MADV_DONTNEED
), set the
+ environment variable GODEBUG=madvdontneed=1
.
+
+ Adding cpu.extension=off to the + GODEBUG environment + variable now disables the use of optional CPU instruction + set extensions in the standard library and runtime. This is not + yet supported on Windows. +
+ ++ Go 1.12 improves the accuracy of memory profiles by fixing + overcounting of large heap allocations. +
+ +
+ Tracebacks, runtime.Caller
,
+ and runtime.Callers
no longer include
+ compiler-generated initialization functions. Doing a traceback
+ during the initialization of a global variable will now show a
+ function named PKG.init.ializers
.
+
+ Go 1.12 adds opt-in support for TLS 1.3 in the crypto/tls
package as
+ specified by RFC 8446. It can
+ be enabled by adding the value tls13=1
to the GODEBUG
+ environment variable. It will be enabled by default in Go 1.13.
+
+ To negotiate TLS 1.3, make sure you do not set an explicit MaxVersion
in
+ Config
and run your program with
+ the environment variable GODEBUG=tls13=1
set.
+
+ All TLS 1.2 features except TLSUnique
in
+ ConnectionState
+ and renegotiation are available in TLS 1.3 and provide equivalent or
+ better security and performance. Note that even though TLS 1.3 is backwards
+ compatible with previous versions, certain legacy systems might not work
+ correctly when attempting to negotiate it. RSA certificate keys too small
+ to be secure (including 512-bit keys) will not work with TLS 1.3.
+
+ TLS 1.3 cipher suites are not configurable. All supported cipher suites are
+ safe, and if PreferServerCipherSuites
is set in
+ Config
the preference order
+ is based on the available hardware.
+
+ Early data (also called "0-RTT mode") is not currently supported as a + client or server. Additionally, a Go 1.12 server does not support skipping + unexpected early data if a client sends it. Since TLS 1.3 0-RTT mode + involves clients keeping state regarding which servers support 0-RTT, + a Go 1.12 server cannot be part of a load-balancing pool where some other + servers do support 0-RTT. If switching a domain from a server that supported + 0-RTT to a Go 1.12 server, 0-RTT would have to be disabled for at least the + lifetime of the issued session tickets before the switch to ensure + uninterrupted operation. +
+ +
+ In TLS 1.3 the client is the last one to speak in the handshake, so if it causes
+ an error to occur on the server, it will be returned on the client by the first
+ Read
, not by
+ Handshake
. For
+ example, that will be the case if the server rejects the client certificate.
+ Similarly, session tickets are now post-handshake messages, so are only
+ received by the client upon its first
+ Read
.
+
+ As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +
+ + + +
+ Reader
's UnreadRune
and
+ UnreadByte
methods will now return an error
+ if they are called after Peek
.
+
+ The new function ReplaceAll
returns a copy of
+ a byte slice with all non-overlapping instances of a value replaced by another.
+
+ A pointer to a zero-value Reader
is now
+ functionally equivalent to NewReader
(nil)
.
+ Prior to Go 1.12, the former could not be used as a substitute for the latter in all cases.
+
+ A warning will now be printed to standard error the first time
+ Reader.Read
is blocked for more than 60 seconds waiting
+ to read entropy from the kernel.
+
+ On FreeBSD, Reader
now uses the getrandom
+ system call if available, /dev/urandom
otherwise.
+
+ This release removes the assembly implementations, leaving only + the pure Go version. The Go compiler generates code that is + either slightly better or slightly worse, depending on the exact + CPU. RC4 is insecure and should only be used for compatibility + with legacy systems. +
+ +
+ If a client sends an initial message that does not look like TLS, the server
+ will no longer reply with an alert, and it will expose the underlying
+ net.Conn
in the new field Conn
of
+ RecordHeaderError
.
+
+ A query cursor can now be obtained by passing a
+ *Rows
+ value to the Row.Scan
method.
+
+ Maps are now printed in key-sorted order to ease testing. The ordering rules are:
+ When applicable, nil compares low;
+ ints, floats, and strings order by <;
+ NaN compares less than non-NaN floats;
+ bool compares false before true;
+ Complex compares real, then imaginary;
+ Pointers compare by machine address;
+ Channel values compare by machine address;
+ Structs compare each field in turn;
+ Arrays compare each element in turn;
+ Interface values compare first by reflect.Type
+ describing the concrete type
+ and then by concrete value as described in the previous rules.
+
+ When printing maps, non-reflexive key values like NaN
were previously
+ displayed as <nil>
. As of this release, the correct values are printed.
+
+ To address some outstanding issues in cmd/doc
,
+ this package has a new Mode
bit,
+ PreserveAST
, which controls whether AST data is cleared.
+
+ The File
type has a new
+ LineStart
field,
+ which returns the position of the start of a given line. This is especially useful
+ in programs that occasionally handle non-Go files, such as assembly, but wish to use
+ the token.Pos
mechanism to identify file positions.
+
+ The RegisterFormat
function is now safe for concurrent use.
+
+ Paletted images with fewer than 16 colors now encode to smaller outputs. +
+ +
+ The new StringWriter
interface wraps the
+ WriteString
function.
+
+ The functions
+ Sin
,
+ Cos
,
+ Tan
,
+ and Sincos
now
+ apply Payne-Hanek range reduction to huge arguments. This
+ produces more accurate answers, but they will not be bit-for-bit
+ identical with the results in earlier releases.
+
+ New extended precision operations Add
, Sub
, Mul
, and Div
are available in uint
, uint32
, and uint64
versions.
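+
+ For example, a 128-bit addition built from the new 64-bit primitives:
+package main
+
+import (
+	"fmt"
+	"math/bits"
+)
+
+func main() {
+	// Add two 128-bit values given as (hi, lo) pairs; the carry out of the
+	// low words feeds into the high words.
+	lo, carry := bits.Add64(^uint64(0), 1, 0)
+	hi, _ := bits.Add64(0, 0, carry)
+	fmt.Println(hi, lo) // 1 0
+}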
+
+ The
+ Dialer.DualStack
setting is now ignored and deprecated;
+ RFC 6555 Fast Fallback ("Happy Eyeballs") is now enabled by default. To disable, set
+ Dialer.FallbackDelay
to a negative value.
+
+ Similarly, TCP keep-alives are now enabled by default if
+ Dialer.KeepAlive
is zero.
+ To disable, set it to a negative value.
+
+ On Linux, the splice
system call is now used when copying from a
+ UnixConn
to a
+ TCPConn
.
+
+ The HTTP server now rejects misdirected HTTP requests to HTTPS servers with a plaintext "400 Bad Request" response. +
+ +
+ The new Client.CloseIdleConnections
+ method calls the Client
's underlying Transport
's CloseIdleConnections
+ if it has one.
+
+ The Transport
no longer rejects HTTP responses which declare
+ HTTP Trailers but don't use chunked encoding. Instead, the declared trailers are now just ignored.
+
+ The Transport
no longer handles MAX_CONCURRENT_STREAMS
values
+ advertised from HTTP/2 servers as strictly as it did during Go 1.10 and Go 1.11. The default behavior is now back
+ to how it was in Go 1.9: each connection to a server can have up to MAX_CONCURRENT_STREAMS
requests
+ active and then new TCP connections are created as needed. In Go 1.10 and Go 1.11 the http2
package
+ would block and wait for requests to finish instead of creating new connections.
+ To get the stricter behavior back, import the
+ golang.org/x/net/http2
package
+ directly and set
+ Transport.StrictMaxConcurrentStreams
to
+ true
.
+
+ Parse
,
+ ParseRequestURI
,
+ and
+ URL.Parse
+ now return an
+ error for URLs containing ASCII control characters, which includes NULL,
+ tab, and newlines.
+
+ The ReverseProxy
now automatically
+ proxies WebSocket requests.
+
+ The new ProcessState.ExitCode
method
+ returns the process's exit code.
+
+ ModeCharDevice
has been added to the ModeType
bitmask, allowing for
+ ModeDevice | ModeCharDevice
to be recovered when masking a
+ FileMode
with ModeType
.
+
+ The new function UserHomeDir
returns the
+ current user's home directory.
+
+ RemoveAll
now supports paths longer than 4096 characters
+ on most Unix systems.
+
+ File.Sync
now uses F_FULLFSYNC
on macOS
+ to correctly flush the file contents to permanent storage.
+ This may cause the method to run more slowly than in previous releases.
+
+ File
now supports
+ a SyscallConn
+ method returning
+ a syscall.RawConn
+ interface value. This may be used to invoke system-specific
+ operations on the underlying file descriptor.
+
+ The IsAbs
function now returns true when passed
+ a reserved filename on Windows such as NUL
.
+ List of reserved names.
+
+ A new MapIter
type is
+ an iterator for ranging over a map. This type is exposed through the
+ Value
type's new
+ MapRange
method.
+ This follows the same iteration semantics as a range statement, with Next
+ to advance the iterator, and Key
/Value
to access each entry.
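+
+ A small sketch of iterating over a map through reflection:
+package main
+
+import (
+	"fmt"
+	"reflect"
+)
+
+func main() {
+	m := map[string]int{"a": 1, "b": 2}
+	iter := reflect.ValueOf(m).MapRange()
+	for iter.Next() {
+		fmt.Println(iter.Key(), iter.Value())
+	}
+}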
+
+ Copy
is no longer necessary
+ to avoid lock contention, so it has been given a partial deprecation comment.
+ Copy
+ may still be appropriate if the reason for its use is to make two copies with
+ different Longest
settings.
+
+ A new BuildInfo
type
+ exposes the build information read from the running binary, available only in
+ binaries built with module support. This includes the main package path, main
+ module information, and the module dependencies. This type is given through the
+ ReadBuildInfo
function
+ on BuildInfo
.
+
+ The new function ReplaceAll
returns a copy of
+ a string with all non-overlapping instances of a value replaced by another.
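+
+ For example:
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+func main() {
+	// Equivalent to strings.Replace(s, old, new, -1).
+	fmt.Println(strings.ReplaceAll("oink oink oink", "oink", "moo"))
+}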
+
+ A pointer to a zero-value Reader
is now
+ functionally equivalent to NewReader
(nil)
.
+ Prior to Go 1.12, the former could not be used as a substitute for the latter in all cases.
+
+ The new Builder.Cap
method returns the capacity of the builder's underlying byte slice.
+
+ The character mapping functions Map
,
+ Title
,
+ ToLower
,
+ ToLowerSpecial
,
+ ToTitle
,
+ ToTitleSpecial
,
+ ToUpper
, and
+ ToUpperSpecial
+ now always guarantee to return valid UTF-8. In earlier releases, if the input was invalid UTF-8 but no character replacements
+ needed to be applied, these routines incorrectly returned the invalid UTF-8 unmodified.
+
+ 64-bit inodes are now supported on FreeBSD 12. Some types have been adjusted accordingly. +
+ +
+ The Unix socket
+ (AF_UNIX
)
+ address family is now supported for compatible versions of Windows.
+
+ The new function Syscall18
+ has been introduced for Windows, allowing for calls with up to 18 arguments.
+
+
+ The Callback
type and NewCallback
function have been renamed;
+ they are now called
+ Func
and
+ FuncOf
, respectively.
+ This is a breaking change, but WebAssembly support is still experimental
+ and not yet subject to the
+ Go 1 compatibility promise. Any code using the
+ old names will need to be updated.
+
+ If a type implements the new
+ Wrapper
+ interface,
+ ValueOf
+ will use it to return the JavaScript value for that type.
+
+ The meaning of the zero
+ Value
+ has changed. It now represents the JavaScript undefined
value
+ instead of the number zero.
+ This is a breaking change, but WebAssembly support is still experimental
+ and not yet subject to the
+ Go 1 compatibility promise. Any code relying on
+ the zero Value
+ to mean the number zero will need to be updated.
+
+ The new
+ Value.Truthy
+ method reports the
+ JavaScript "truthiness"
+ of a given value.
+
+ The -benchtime
flag now supports setting an explicit iteration count instead of a time when the value ends with an "x
". For example, -benchtime=100x
runs the benchmark 100 times.
+
+ When executing a template, long context values are no longer truncated in errors. +
+
+ executing "tmpl" at <.very.deep.context.v...>: map has no entry for key "notpresent"
+
+ is now +
+
+ executing "tmpl" at <.very.deep.context.value.notpresent>: map has no entry for key "notpresent"
+
+ If a user-defined function called by a template panics, the
+ panic is now caught and returned as an error by
+ the Execute
or ExecuteTemplate
method.
+
+ The time zone database in $GOROOT/lib/time/zoneinfo.zip
+ has been updated to version 2018i. Note that this ZIP file is
+ only used if a time zone database is not provided by the operating
+ system.
+
+ It is invalid to convert a nil unsafe.Pointer
to uintptr
and back with arithmetic.
+ (This was already invalid, but will now cause the compiler to misbehave.)
+
+ The latest Go release, version 1.13, arrives six months after Go 1.12. + Most of its changes are in the implementation of the toolchain, runtime, and libraries. + As always, the release maintains the Go 1 promise of compatibility. + We expect almost all Go programs to continue to compile and run as before. +
+ ++ As of Go 1.13, the go command by default downloads and authenticates + modules using the Go module mirror and Go checksum database run by Google. See + https://proxy.golang.org/privacy + for privacy information about these services and the + go command documentation + for configuration details including how to disable the use of these servers or use + different ones. If you depend on non-public modules, see the + documentation for configuring your environment. +
+ ++ Per the number literal proposal, + Go 1.13 supports a more uniform and modernized set of number literal prefixes. +
0b
or 0B
indicates a binary integer literal
+ such as 0b1011
.
+ 0o
or 0O
indicates an octal integer literal
+ such as 0o660
.
+ The existing octal notation indicated by a leading 0
followed by
+ octal digits remains valid.
+ 0x
or 0X
may now be used to express the mantissa of a
+ floating-point number in hexadecimal format such as 0x1.0p-1021
.
+ A hexadecimal floating-point number must always have an exponent, written as the letter
+ p
or P
followed by an exponent in decimal. The exponent scales
+ the mantissa by 2 to the power of the exponent.
+ i
may now be used with any (binary, decimal, hexadecimal)
+ integer or floating-point literal.
+ 1_000_000
, 0b_1010_0110
, or 3.1415_9265
.
+ An underscore may appear between any two digits or the literal prefix and the first digit.
+
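+ For example, all of the following are now valid constant literals:
+<pre>
+package main
+
+import "fmt"
+
+func main() {
+	fmt.Println(0b1011)       // binary: 11
+	fmt.Println(0o660)        // octal: 432
+	fmt.Println(0x1.8p1)      // hexadecimal floating point: 1.5 times 2 = 3
+	fmt.Println(1_000_000)    // digit separators: 1000000
+	fmt.Println(0b_1010_0110) // 166
+	fmt.Println(2i)           // imaginary literal: (0+2i)
+}
+</pre>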
+ Per the signed shift counts proposal,
+ Go 1.13 removes the restriction that a shift count
+ must be unsigned. This change eliminates the need for many artificial uint
conversions,
+ solely introduced to satisfy this (now removed) restriction of the <<
and >>
operators.
+
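+ As a small illustration, a shift count may now be an ordinary int (or any other signed integer type):
+<pre>
+package main
+
+import "fmt"
+
+func main() {
+	x := uint64(1)
+	n := 3 // an int; before Go 1.13 this required uint(n) in the shifts below
+	fmt.Println(x << n) // 8
+	fmt.Println(x >> n) // 0
+	// A negative shift count still panics at run time.
+}
+</pre>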
+ These language changes were implemented by changes to the compiler, and corresponding internal changes to the library
+ packages go/scanner
and
+ text/scanner
(number literals),
+ and go/types
(signed shift counts).
+
+ If your code uses modules and your go.mod
 file specifies a language version, be sure
+ it is set to at least 1.13
to get access to these language changes.
+ You can do this by editing the go.mod
file directly, or you can run
+ go mod edit -go=1.13
.
+
+ Go 1.13 is the last release that will run on Native Client (NaCl). +
+ +
+ For GOARCH=wasm
, the new environment variable GOWASM
takes a comma-separated list of experimental features that the binary gets compiled with.
+ The valid values are documented here.
+
+ AIX on PPC64 (aix/ppc64
) now supports cgo, external
+ linking, and the c-archive
and pie
build
+ modes.
+
+ Go programs are now compatible with Android 10. +
+ ++ As announced in the Go 1.12 release notes, + Go 1.13 now requires macOS 10.11 El Capitan or later; + support for previous versions has been discontinued. +
+ +
+ As announced in the Go 1.12 release notes,
+ Go 1.13 now requires FreeBSD 11.2 or later;
+ support for previous versions has been discontinued.
+ FreeBSD 12.0 or later requires a kernel with the COMPAT_FREEBSD11
+ option set (this is the default).
+
+ Go now supports Illumos with GOOS=illumos
.
+ The illumos
build tag implies the solaris
+ build tag.
+
+ The Windows version specified by internally-linked Windows binaries + is now Windows 7 rather than NT 4.0. This was already the minimum + required version for Go, but can affect the behavior of system calls + that have a backwards-compatibility mode. These will now behave as + documented. Externally-linked binaries (any program using cgo) have + always specified a more recent Windows version. +
+ +
+ The GO111MODULE
+ environment variable continues to default to auto
, but
+ the auto
setting now activates the module-aware mode of
+ the go
command whenever the current working directory contains,
+ or is below a directory containing, a go.mod
file — even if the
+ current directory is within GOPATH/src
. This change simplifies
+ the migration of existing code within GOPATH/src
and the ongoing
+ maintenance of module-aware packages alongside non-module-aware importers.
+
+ The new
+ GOPRIVATE
+ environment variable indicates module paths that are not publicly available.
+ It serves as the default value for the lower-level GONOPROXY
+ and GONOSUMDB
variables, which provide finer-grained control over
+ which modules are fetched via proxy and verified using the checksum database.
+
+ The GOPROXY
+ environment variable may now be set to a comma-separated list of proxy
+ URLs or the special token direct
, and
+ its default value is
+ now https://proxy.golang.org,direct
. When resolving a package
+ path to its containing module, the go
command will try all
+ candidate module paths on each proxy in the list in succession. An unreachable
+ proxy or HTTP status code other than 404 or 410 terminates the search without
+ consulting the remaining proxies.
+
+ The new
+ GOSUMDB
+ environment variable identifies the name, and optionally the public key and
+ server URL, of the database to consult for checksums of modules that are not
+ yet listed in the main module's go.sum
file.
+ If GOSUMDB
does not include an explicit URL, the URL is chosen by
+ probing the GOPROXY
URLs for an endpoint indicating support for
+ the checksum database, falling back to a direct connection to the named
+ database if it is not supported by any proxy. If GOSUMDB
is set
+ to off
, the checksum database is not consulted and only the
+ existing checksums in the go.sum
file are verified.
+
+ Users who cannot reach the default proxy and checksum database (for example,
+ due to a firewalled or sandboxed configuration) may disable their use by
+ setting GOPROXY
to direct
, and/or
+ GOSUMDB
to off
.
+ go
env
-w
+ can be used to set the default values for these variables independent of
+ platform:
+
+go env -w GOPROXY=direct +go env -w GOSUMDB=off ++ +
go
get
+ In module-aware mode,
+ go
get
+ with the -u
flag now updates a smaller set of modules that is
+ more consistent with the set of packages updated by
+ go
get
-u
in GOPATH mode.
+ go
get
-u
continues to update the
+ modules and packages named on the command line, but additionally updates only
+ the modules containing the packages imported by the named packages,
+ rather than the transitive module requirements of the modules containing the
+ named packages.
+
+ Note in particular that go
get
-u
+ (without additional arguments) now updates only the transitive imports of the
+ package in the current directory. To instead update all of the packages
+ transitively imported by the main module (including test dependencies), use
+ go
get
-u
all
.
+
+ As a result of the above changes to
+ go
get
-u
, the
+ go
get
subcommand no longer supports
+ the -m
flag, which caused go
get
to
+ stop before loading packages. The -d
flag remains supported, and
+ continues to cause go
get
to stop after downloading
+ the source code needed to build dependencies of the named packages.
+
+ By default, go
get
-u
in module mode
+ upgrades only non-test dependencies, as in GOPATH mode. It now also accepts
+ the -t
flag, which (as in GOPATH mode)
+ causes go
get
to include the packages imported
+ by tests of the packages named on the command line.
+
+ In module-aware mode, the go
get
subcommand now
+ supports the version suffix @patch
. The @patch
+ suffix indicates that the named module, or module containing the named
+ package, should be updated to the highest patch release with the same
+ major and minor versions as the version found in the build list.
+
+ If a module passed as an argument to go
get
+ without a version suffix is already required at a newer version than the
+ latest released version, it will remain at the newer version. This is
+ consistent with the behavior of the -u
flag for module
+ dependencies. This prevents unexpected downgrades from pre-release versions.
+ The new version suffix @upgrade
explicitly requests this
+ behavior. @latest
explicitly requests the latest version
+ regardless of the current version.
+
+ When extracting a module from a version control system, the go
+ command now performs additional validation on the requested version string.
+
+ The +incompatible
version annotation bypasses the requirement
+ of semantic
+ import versioning for repositories that predate the introduction of
+ modules. The go
command now verifies that such a version does not
+ include an explicit go.mod
file.
+
+ The go
command now verifies the mapping
+ between pseudo-versions and
+ version-control metadata. Specifically:
+
vX.0.0
, or derived
+ from a tag on an ancestor of the named revision, or derived from a tag that
+ includes build metadata on
+ the named revision itself.go
command would generate. (For SHA-1 hashes as used
+ by git
, a 12-digit prefix.)
+ If a require
directive in the
+ main module uses
+ an invalid pseudo-version, it can usually be corrected by redacting the
+ version to just the commit hash and re-running a go
command, such
+ as go
list
-m
all
+ or go
mod
tidy
. For example,
+
require github.com/docker/docker v1.14.0-0.20190319215453-e7b5f7dbe98c+
can be redacted to
+require github.com/docker/docker e7b5f7dbe98c+
which currently resolves to
+require github.com/docker/docker v0.7.3-0.20190319215453-e7b5f7dbe98c+ +
+ If one of the transitive dependencies of the main module requires an invalid
+ version or pseudo-version, the invalid version can be replaced with a valid
+ one using a
+ replace
directive in
+ the go.mod
file of the main module. If the replacement is a
+ commit hash, it will be resolved to the appropriate pseudo-version as above.
+ For example,
+
replace github.com/docker/docker v1.14.0-0.20190319215453-e7b5f7dbe98c => github.com/docker/docker e7b5f7dbe98c+
currently resolves to
+replace github.com/docker/docker v1.14.0-0.20190319215453-e7b5f7dbe98c => github.com/docker/docker v0.7.3-0.20190319215453-e7b5f7dbe98c+ +
+ The go
env
+ command now accepts a -w
flag to set the per-user default value
+ of an environment variable recognized by the
+ go
command, and a corresponding -u
flag to unset a
+ previously-set default. Defaults set via
+ go
env
-w
are stored in the
+ go/env
file within
+ os.UserConfigDir()
.
+
+ The
+ go
version
command now accepts arguments naming
+ executables and directories. When invoked on an executable,
+ go
version
prints the version of Go used to build
+ the executable. If the -m
flag is used,
+ go
version
prints the executable's embedded module
+ version information, if available. When invoked on a directory,
+ go
version
prints information about executables
+ contained in the directory and its subdirectories.
+
+ The new go
+ build
flag -trimpath
removes all file system paths
+ from the compiled executable, to improve build reproducibility.
+
+ If the -o
flag passed to go
build
+ refers to an existing directory, go
build
will now
+ write executable files within that directory for main
packages
+ matching its package arguments.
+
+ go
+ generate
now sets the generate
build tag so that
+ files may be searched for directives but ignored during build.
+
+ As announced in the Go 1.12 release
+ notes, binary-only packages are no longer supported. Building a binary-only
+ package (marked with a //go:binary-only-package
comment) now
+ results in an error.
+
+ The compiler has a new implementation of escape analysis that is
+ more precise. For most Go code this should be an improvement (in other
+ words, more Go variables and expressions allocated on the stack
+ instead of heap). However, this increased precision may also break
+ invalid code that happened to work before (for example, code that
+ violates
+ the unsafe.Pointer
+ safety rules). If you notice any regressions that appear
+ related, the old escape analysis pass can be re-enabled
+ with go
build
-gcflags=all=-newescape=false
.
+ The option to use the old escape analysis will be removed in a
+ future release.
+
+ The compiler no longer emits floating point or complex constants
+ to go_asm.h
files. These have always been emitted in a
+ form that could not be used as a numeric constant in assembly code.
+
+ The assembler now supports many of the atomic instructions + introduced in ARM v8.1. +
+ +
+ gofmt
(and with that go fmt
) now canonicalizes
+ number literal prefixes and exponents to use lower-case letters, but
+ leaves hexadecimal digits alone. This improves readability when using the new octal prefix
+ (0O
becomes 0o
), and the rewrite is applied consistently.
+ gofmt
now also removes unnecessary leading zeroes from a decimal integer
+ imaginary literal. (For backwards-compatibility, an integer imaginary literal
+ starting with 0
is considered a decimal, not an octal number.
+ Removing superfluous leading zeroes avoids potential confusion.)
+ For instance, 0B1010
, 0XabcDEF
, 0O660
,
+ 1.2E3
, and 01i
become 0b1010
, 0xabcDEF
,
+ 0o660
, 1.2e3
, and 1i
after applying gofmt
.
+
godoc
and go
doc
+ The godoc
webserver is no longer included in the main binary distribution.
+ To run the godoc
webserver locally, manually install it first:
+
+go get golang.org/x/tools/cmd/godoc +godoc ++ + +
+ The
+ go
doc
+ command now always includes the package clause in its output, except for
+ commands. This replaces the previous behavior where a heuristic was used,
+ causing the package clause to be omitted under certain conditions.
+
+ Out of range panic messages now include the index that was out of
+ bounds and the length (or capacity) of the slice. For
+ example, s[3]
on a slice of length 1 will panic with
+ "runtime error: index out of range [3] with length 1".
+
+ This release improves performance of most uses of defer
+ by 30%.
+
+ The runtime is now more aggressive at returning memory to the + operating system to make it available to co-tenant applications. + Previously, the runtime could retain memory for five or more minutes + following a spike in the heap size. It will now begin returning it + promptly after the heap shrinks. However, on many OSes, including + Linux, the OS itself reclaims memory lazily, so process RSS will not + decrease until the system is under memory pressure. +
+ +
+ As announced in Go 1.12, Go 1.13 enables support for TLS 1.3 in the
+ crypto/tls
package by default. It can be disabled by adding the
+ value tls13=0
to the GODEBUG
+ environment variable. The opt-out will be removed in Go 1.14.
+
+ See the Go 1.12 release notes for important + compatibility information. +
+ +
+ The new crypto/ed25519
+ package implements the Ed25519 signature
+ scheme. This functionality was previously provided by the
+ golang.org/x/crypto/ed25519
+ package, which becomes a wrapper for
+ crypto/ed25519
when used with Go 1.13+.
+
+ Go 1.13 contains support for error wrapping, as first proposed in + the + Error Values proposal and discussed on the + associated issue. +
+
+ An error e
can wrap another error w
by providing
+ an Unwrap
method that returns w
. Both e
+ and w
are available to programs, allowing e
to provide
+ additional context to w
or to reinterpret it while still allowing
+ programs to make decisions based on w
.
+
+ To support wrapping, fmt.Errorf
now has a %w
+ verb for creating wrapped errors, and three new functions in
+ the errors
package (
+ errors.Unwrap
,
+ errors.Is
and
+ errors.As
) simplify unwrapping
+ and inspecting wrapped errors.
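+ A minimal sketch of how these pieces fit together (the file name and the wrapping message are illustrative only):
+<pre>
+package main
+
+import (
+	"errors"
+	"fmt"
+	"os"
+)
+
+func main() {
+	// Wrap an underlying error with additional context using %w.
+	_, err := os.Open("missing.txt")
+	wrapped := fmt.Errorf("loading config: %w", err)
+
+	// errors.Is walks the chain of wrapped errors.
+	fmt.Println(errors.Is(wrapped, os.ErrNotExist)) // true
+
+	// errors.As finds the first error in the chain of the target's type.
+	var pathErr *os.PathError
+	if errors.As(wrapped, &pathErr) {
+		fmt.Println(pathErr.Path) // missing.txt
+	}
+
+	// errors.Unwrap returns the next error in the chain.
+	fmt.Println(errors.Unwrap(wrapped) == err) // true
+}
+</pre>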
+
+ For more information, read the errors
package
+ documentation, or see
+ the Error Value FAQ.
+ There will soon be a blog post as well.
+
+ As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +
+ +
+ The new ToValidUTF8
function returns a
+ copy of a given byte slice with each run of invalid UTF-8 byte sequences replaced by a given slice.
+
+ The formatting of contexts returned by WithValue
no longer depends on fmt
and will not stringify in the same way. Code that depends on the exact previous stringification might be affected.
+
+ Support for SSL version 3.0 (SSLv3) + is now deprecated and will be removed in Go 1.14. Note that SSLv3 is the + cryptographically broken + protocol predating TLS. +
+ ++ SSLv3 was always disabled by default, other than in Go 1.12, when it was + mistakenly enabled by default server-side. It is now again disabled by + default. (SSLv3 was never supported client-side.) +
+ ++ Ed25519 certificates are now supported in TLS versions 1.2 and 1.3. +
+ +
+ Ed25519 keys are now supported in certificates and certificate requests
+ according to RFC 8410, as well as by the
+ ParsePKCS8PrivateKey
,
+ MarshalPKCS8PrivateKey
,
+ and ParsePKIXPublicKey
functions.
+
+ The paths searched for system roots now include /etc/ssl/cert.pem
+ to support the default location in Alpine Linux 3.7+.
+
+ The new NullTime
type represents a time.Time
that may be null.
+
+ The new NullInt32
type represents an int32
that may be null.
+
+ The Data.Type
+ method no longer panics if it encounters an unknown DWARF tag in
+ the type graph. Instead, it represents that component of the
+ type with
+ an UnsupportedType
+ object.
+
+ The new function As
finds the first
+ error in a given error’s chain (sequence of wrapped errors)
+ that matches a given target’s type, and if so, sets the target to that error value.
+
+ The new function Is
reports whether a given error value matches an
+ error in another’s chain.
+
+ The new function Unwrap
returns the result of calling
+ Unwrap
on a given error, if one exists.
+
+ The printing verbs %x
and %X
now format floating-point and
+ complex numbers in hexadecimal notation, in lower-case and upper-case respectively.
+
+ The new printing verb %O
formats integers in base 8, emitting the 0o
prefix.
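+ For example:
+<pre>
+package main
+
+import "fmt"
+
+func main() {
+	fmt.Printf("%x\n", 1.0) // 0x1p+00
+	fmt.Printf("%X\n", 1.0) // 0X1P+00
+	fmt.Printf("%O\n", 8)   // 0o10
+	fmt.Printf("%o\n", 8)   // 10 (the existing verb, without a prefix)
+}
+</pre>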
+
+ The scanner now accepts hexadecimal floating-point values, digit-separating underscores
+ and leading 0b
and 0o
prefixes.
+ See the Changes to the language for details.
+
The Errorf
function
+ has a new verb, %w
, whose operand must be an error.
+ The error returned from Errorf
will have an
+ Unwrap
method which returns the operand of %w
.
+
+ The scanner has been updated to recognize the new Go number literals, specifically
+ binary literals with 0b
/0B
prefix, octal literals with 0o
/0O
prefix,
+ and floating-point numbers with hexadecimal mantissa. The imaginary suffix i
may now be used with any number
+ literal, and underscores may be used as digit separators for grouping.
+ See the Changes to the language for details.
+
+ The type-checker has been updated to follow the new rules for integer shifts. + See the Changes to the language for details. +
+ +
+ When using a <script>
tag with "module" set as the
+ type attribute, code will now be interpreted as JavaScript module script.
+
+ The new Writer
function returns the output destination for the standard logger.
+
+ The new Rat.SetUint64
method sets the Rat
to a uint64
value.
+
+ For Float.Parse
, if base is 0, underscores
+ may be used between digits for readability.
+ See the Changes to the language for details.
+
+ For Int.SetString
, if base is 0, underscores
+ may be used between digits for readability.
+ See the Changes to the language for details.
+
+ Rat.SetString
now accepts non-decimal floating point representations.
+
+ The execution time of Add
,
+ Sub
,
+ Mul
,
+ RotateLeft
, and
+ ReverseBytes
is now
+ guaranteed to be independent of the inputs.
+
+ On Unix systems where use-vc
is set in resolv.conf
, TCP is used for DNS resolution.
+
+ The new field ListenConfig.KeepAlive
+ specifies the keep-alive period for network connections accepted by the listener.
+ If this field is 0 (the default), TCP keep-alives will be enabled.
+ To disable them, set it to a negative value.
+
+ Note that the error returned from I/O on a connection that was
+ closed by a keep-alive timeout will have a
+ Timeout
method that returns true
if called.
+ This can make a keep-alive error difficult to distinguish from
+ an error returned due to a missed deadline as set by the
+ SetDeadline
+ method and similar methods.
+ Code that uses deadlines and checks for them with
+ the Timeout
method or
+ with os.IsTimeout
+ may want to disable keep-alives, or
+ use errors.Is(syscall.ETIMEDOUT)
(on Unix systems)
+ which will return true for a keep-alive timeout and false for a
+ deadline timeout.
+
+ The new fields Transport.WriteBufferSize
+ and Transport.ReadBufferSize
+ allow one to specify the sizes of the write and read buffers for a Transport
.
+ If either field is zero, a default size of 4KB is used.
+
+ The new field Transport.ForceAttemptHTTP2
+ controls whether HTTP/2 is enabled when a non-zero Dial
, DialTLS
, or DialContext
+ func or TLSClientConfig
is provided.
+
+ Transport.MaxConnsPerHost
now works
+ properly with HTTP/2.
+
+ TimeoutHandler
's
+ ResponseWriter
now implements the
+ Pusher
interface.
+
+ The StatusCode
103
"Early Hints"
has been added.
+
+ Transport
now uses the Request.Body
's
+ io.ReaderFrom
implementation if available, to optimize writing the body.
+
+ On encountering unsupported transfer-encodings, http.Server
now
+ returns a "501 Unimplemented" status as mandated by the HTTP specification RFC 7230 Section 3.3.1.
+
+ The new Server
fields
+ BaseContext
and
+ ConnContext
+ allow finer control over the Context
values provided to requests and connections.
+
+ http.DetectContentType
now correctly detects RAR signatures, and can now also detect RAR v5 signatures.
+
+ The new Header
method
+ Clone
returns a copy of the receiver.
+
+ A new function NewRequestWithContext
has been added and it
+ accepts a Context
that controls the entire lifetime of
+ the created outgoing Request
, suitable for use with
+ Client.Do
and Transport.RoundTrip
.
+
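+ A minimal sketch (the URL and the timeout are illustrative only):
+<pre>
+package main
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"time"
+)
+
+func main() {
+	// The context bounds the entire lifetime of the request, including
+	// connection setup, redirects, and reading the response body.
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/", nil)
+	if err != nil {
+		panic(err)
+	}
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		fmt.Println("request failed:", err)
+		return
+	}
+	defer resp.Body.Close()
+	fmt.Println(resp.Status)
+}
+</pre>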
+ The Transport
no longer logs errors when servers
+ gracefully shut down idle connections using a "408 Request Timeout"
response.
+
+ The new UserConfigDir
function
+ returns the default directory to use for user-specific configuration data.
+
+ If a File
is opened using the O_APPEND flag, its
+ WriteAt
method will always return an error.
+
+ On Windows, the environment for a Cmd
always inherits the
+ %SYSTEMROOT%
value of the parent process unless the
+ Cmd.Env
field includes an explicit value for it.
+
+ The new Value.IsZero
method reports whether a Value
is the zero value for its type.
+
+ The MakeFunc
function now allows assignment conversions on returned values, instead of requiring exact type match. This is particularly useful when the type being returned is an interface type, but the value actually returned is a concrete value implementing that type.
+
+ Tracebacks, runtime.Caller
,
+ and runtime.Callers
now refer to the function that
+ initializes the global variables of PKG
+ as PKG.init
instead of PKG.init.ializers
.
+
+ For strconv.ParseFloat
,
+ strconv.ParseInt
+ and strconv.ParseUint
,
+ if base is 0, underscores may be used between digits for readability.
+ See the Changes to the language for details.
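+ For example:
+<pre>
+package main
+
+import (
+	"fmt"
+	"strconv"
+)
+
+func main() {
+	// With base 0, the prefix selects the base and underscores are accepted.
+	n, _ := strconv.ParseInt("1_000_000", 0, 64)
+	u, _ := strconv.ParseUint("0b_1010_0110", 0, 64)
+	f, _ := strconv.ParseFloat("0x1.8p1", 64)
+	fmt.Println(n, u, f) // 1000000 166 3
+}
+</pre>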
+
+ The new ToValidUTF8
function returns a
+ copy of a given string with each run of invalid UTF-8 byte sequences replaced by a given string.
+
+ The fast paths of Mutex.Lock
, Mutex.Unlock
,
+ RWMutex.Lock
, RWMutex.RUnlock
, and
+ Once.Do
are now inlined in their callers.
+ For the uncontended cases on amd64, these changes make Once.Do
twice as fast, and the
+ Mutex
/RWMutex
methods up to 10% faster.
+
+ Large Pools
 no longer increase stop-the-world pause times.
+
+ Pool
no longer needs to be completely repopulated after every GC. It now retains some objects across GCs,
+ as opposed to releasing all objects, reducing load spikes for heavy users of Pool
.
+
+ Uses of _getdirentries64
have been removed from
+ Darwin builds, to allow Go binaries to be uploaded to the macOS
+ App Store.
+
+ The new ProcessAttributes
and ThreadAttributes
fields in
+ SysProcAttr
have been introduced for Windows,
+ exposing security settings when creating new processes.
+
+ EINVAL
is no longer returned in zero
+ Chmod
mode on Windows.
+
+ Values of type Errno
can be tested against error values in
+ the os
package,
+ like ErrExist
, using
+ errors.Is
.
+
+ TypedArrayOf
has been replaced by
+ CopyBytesToGo
and
+ CopyBytesToJS
for copying bytes
+ between a byte slice and a Uint8Array
.
+
+ When running benchmarks, B.N
is no longer rounded.
+
+ The new method B.ReportMetric
lets users report
+ custom benchmark metrics and override built-in metrics.
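+ A minimal sketch of reporting a custom metric; the doWork helper and the items/op unit are illustrative only:
+<pre>
+package example_test
+
+import "testing"
+
+func doWork() int { return 1 } // hypothetical unit of work
+
+func BenchmarkProcess(b *testing.B) {
+	var handled int
+	for i := 0; i < b.N; i++ {
+		handled += doWork()
+	}
+	// Report an additional per-operation metric alongside ns/op.
+	b.ReportMetric(float64(handled)/float64(b.N), "items/op")
+}
+</pre>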
+
+ Testing flags are now registered in the new Init
function,
+ which is invoked by the generated main
function for the test.
+ As a result, testing flags are now only registered when running a test binary,
+ and packages that call flag.Parse
during package initialization may cause tests to fail.
+
+ The scanner has been updated to recognize the new Go number literals, specifically
+ binary literals with 0b
/0B
prefix, octal literals with 0o
/0O
prefix,
+ and floating-point numbers with hexadecimal mantissa.
+ Also, the new AllowDigitSeparators
+ mode allows number literals to contain underscores as digit separators (off by default for backwards-compatibility).
+ See the Changes to the language for details.
+
+ The new slice function + returns the result of slicing its first argument by the following arguments. +
+ +
+ Day-of-year is now supported by Format
+ and Parse
.
+
+ The new Duration
methods
+ Microseconds
and
+ Milliseconds
return
+ the duration as an integer count of their respectively named units.
+
+ The unicode
package and associated
+ support throughout the system has been upgraded from Unicode 10.0 to
+ Unicode 11.0,
+ which adds 684 new characters, including seven new scripts, and 66 new emoji.
+
+ The latest Go release, version 1.14, arrives six months after Go 1.13. + Most of its changes are in the implementation of the toolchain, runtime, and libraries. + As always, the release maintains the Go 1 promise of compatibility. + We expect almost all Go programs to continue to compile and run as before. +
+ +
+ Module support in the go
command is now ready for production use,
+ and we encourage all users to migrate to Go
+ modules for dependency management. If you are unable to migrate due to a problem in the Go
+ toolchain, please ensure that the problem has an
+ open issue
+ filed. (If the issue is not on the Go1.15
milestone, please let us
+ know why it prevents you from migrating so that we can prioritize it
+ appropriately.)
+
+ Per the overlapping interfaces proposal, + Go 1.14 now permits embedding of interfaces with overlapping method sets: + methods from an embedded interface may have the same names and identical signatures + as methods already present in the (embedding) interface. This solves problems that typically + (but not exclusively) occur with diamond-shaped embedding graphs. + Explicitly declared methods in an interface must remain + unique, as before. +
+ ++ Go 1.14 is the last release that will run on macOS 10.11 El Capitan. + Go 1.15 will require macOS 10.12 Sierra or later. +
+ +
+ Go 1.14 is the last Go release to support 32-bit binaries on
+ macOS (the darwin/386
port). They are no longer
+ supported by macOS, starting with macOS 10.15 (Catalina).
+ Go continues to support the 64-bit darwin/amd64
port.
+
+ Go 1.14 will likely be the last Go release to support 32-bit
+ binaries on iOS, iPadOS, watchOS, and tvOS
+ (the darwin/arm
port). Go continues to support the
+ 64-bit darwin/arm64
port.
+
+ Go binaries on Windows now + have DEP + (Data Execution Prevention) enabled. +
+ +
+ On Windows, creating a file
+ via os.OpenFile
with
+ the os.O_CREATE
flag, or
+ via syscall.Open
with
+ the syscall.O_CREAT
+ flag, will now create the file as read-only if the
+ bit 0o200
(owner write permission) is not set in the
+ permission argument. This makes the behavior on Windows more like
+ that on Unix systems.
+
+ JavaScript values referenced from Go via js.Value
+ objects can now be garbage collected.
+
+ js.Value
values can no longer be compared using
+ the ==
operator, and instead must be compared using
+ their Equal
method.
+
+ js.Value
now
+ has IsUndefined
, IsNull
,
+ and IsNaN
methods.
+
+ Go 1.14 contains experimental support for 64-bit RISC-V on Linux
+ (GOOS=linux
, GOARCH=riscv64
). Be aware
+ that performance, assembly syntax stability, and possibly
+ correctness are a work in progress.
+
+ Go now supports the 64-bit ARM architecture on FreeBSD 12.0 or later (the
+ freebsd/arm64
port).
+
+ As announced in the Go 1.13 release notes,
+ Go 1.14 drops support for the Native Client platform (GOOS=nacl
).
+
+ The runtime now respects zone CPU caps
+ (the zone.cpu-cap
resource control)
+ for runtime.NumCPU
and the default value
+ of GOMAXPROCS
.
+
+ When the main module contains a top-level vendor
directory and
+ its go.mod
file specifies go
1.14
or
+ higher, the go
command now defaults to -mod=vendor
+ for operations that accept that flag. A new value for that flag,
+ -mod=mod
, causes the go
command to instead load
+ modules from the module cache (as when no vendor
directory is
+ present).
+
+ When -mod=vendor
is set (explicitly or by default), the
+ go
command now verifies that the main module's
+ vendor/modules.txt
file is consistent with its
+ go.mod
file.
+
+ go
list
-m
no longer silently omits
+ transitive dependencies that do not provide packages in
+ the vendor
directory. It now fails explicitly if
+ -mod=vendor
is set and information is requested for a module not
+ mentioned in vendor/modules.txt
.
+
+ The go
get
command no longer accepts
+ the -mod
flag. Previously, the flag's setting either
+ was ignored or
+ caused the build to fail.
+
+ -mod=readonly
is now set by default when the go.mod
+ file is read-only and no top-level vendor
directory is present.
+
+ -modcacherw
is a new flag that instructs the go
+ command to leave newly-created directories in the module cache at their
+ default permissions rather than making them read-only.
+ The use of this flag makes it more likely that tests or other tools will
+ accidentally add files not included in the module's verified checksum.
+ However, it allows the use of rm
-rf
+ (instead of go
clean
-modcache
)
+ to remove the module cache.
+
+ -modfile=file
is a new flag that instructs the go
+ command to read (and possibly write) an alternate go.mod
file
+ instead of the one in the module root directory. A file
+ named go.mod
must still be present in order to determine the
+ module root directory, but it is not accessed. When -modfile
is
+ specified, an alternate go.sum
file is also used: its path is
+ derived from the -modfile
flag by trimming the .mod
+ extension and appending .sum
.
+
+ GOINSECURE
is a new environment variable that instructs
+ the go
command to not require an HTTPS connection, and to skip
+ certificate validation, when fetching certain modules directly from their
+ origins. Like the existing GOPRIVATE
variable, the value
+ of GOINSECURE
is a comma-separated list of glob patterns.
+
+ When module-aware mode is enabled explicitly (by setting
+ GO111MODULE=on
), most module commands have more
+ limited functionality if no go.mod
file is present. For
+ example, go
build
,
+ go
run
, and other build commands can only build
+ packages in the standard library and packages specified as .go
+ files on the command line.
+
+ Previously, the go
command would resolve each package path
+ to the latest version of a module but would not record the module path
+ or version. This resulted in slow,
+ non-reproducible builds.
+
+ go
get
continues to work as before, as do
+ go
mod
download
and
+ go
list
-m
with explicit versions.
+
+incompatible
versions
+ If the latest version of a module contains a go.mod
file,
+ go
get
will no longer upgrade to an
+ incompatible
+ major version of that module unless such a version is requested explicitly
+ or is already required.
+ go
list
also omits incompatible major versions
+ for such a module when fetching directly from version control, but may
+ include them if reported by a proxy.
+
go.mod
file maintenance
+ go
commands other than
+ go
mod
tidy
no longer
+ remove a require
directive that specifies a version of an indirect dependency
+ that is already implied by other (transitive) dependencies of the main
+ module.
+
+ go
commands other than
+ go
mod
tidy
no longer
+ edit the go.mod
file if the changes are only cosmetic.
+
+ When -mod=readonly
is set, go
commands will no
+ longer fail due to a missing go
directive or an erroneous
+ // indirect
comment.
+
+ The go
command now supports Subversion repositories in module mode.
+
+ The go
command now includes snippets of plain-text error messages
+ from module proxies and other HTTP servers.
+ An error message will only be shown if it is valid UTF-8 and consists of only
+ graphic characters and spaces.
+
+ go test -v
now streams t.Log
output as it happens,
+ rather than at the end of all tests.
+
+ This release improves the performance of most uses
+ of defer
to incur almost zero overhead compared to
+ calling the deferred function directly.
+ As a result, defer
can now be used in
+ performance-critical code without overhead concerns.
+
+ Goroutines are now asynchronously preemptible.
+ As a result, loops without function calls no longer potentially
+ deadlock the scheduler or significantly delay garbage collection.
+ This is supported on all platforms except windows/arm
,
+ darwin/arm
, js/wasm
, and
+ plan9/*
.
+
+ A consequence of the implementation of preemption is that on Unix
+ systems, including Linux and macOS systems, programs built with Go
+ 1.14 will receive more signals than programs built with earlier
+ releases.
+ This means that programs that use packages
+ like syscall
+ or golang.org/x/sys/unix
+ will see more slow system calls fail with EINTR
errors.
+ Those programs will have to handle those errors in some way, most
+ likely looping to try the system call again. For more
+ information about this
+ see man
+ 7 signal
for Linux systems or similar documentation for
+ other systems.
+
+ The page allocator is more efficient and incurs significantly less
+ lock contention at high values of GOMAXPROCS
.
+ This is most noticeable as lower latency and higher throughput for
+ large allocations being done in parallel and at a high rate.
+
+ Internal timers, used by
+ time.After
,
+ time.Tick
,
+ net.Conn.SetDeadline
,
+ and friends, are more efficient, with less lock contention and fewer
+ context switches.
+ This is a performance improvement that should not cause any user
+ visible changes.
+
+ This release adds -d=checkptr
as a compile-time option
+ for adding instrumentation to check that Go code is following
+ unsafe.Pointer
safety rules dynamically.
+ This option is enabled by default (except on Windows) with
+ the -race
or -msan
flags, and can be
+ disabled with -gcflags=all=-d=checkptr=0
.
+ Specifically, -d=checkptr
checks the following:
+
unsafe.Pointer
to *T
,
+ the resulting pointer must be aligned appropriately
+ for T
.
+ unsafe.Pointer
-typed operands must point
+ into the same object.
+
+ Using -d=checkptr
is not currently recommended on
+ Windows because it causes false alerts in the standard library.
+
+ The compiler can now emit machine-readable logs of key optimizations
+ using the -json
flag, including inlining, escape
+ analysis, bounds-check elimination, and nil-check elimination.
+
+ Detailed escape analysis diagnostics (-m=2
) now work again.
+ This had been dropped from the new escape analysis implementation in
+ the previous release.
+
+ All Go symbols in macOS binaries now begin with an underscore, + following platform conventions. +
+ ++ This release includes experimental support for compiler-inserted + coverage instrumentation for fuzzing. + See issue 14565 for more + details. + This API may change in future releases. +
+ +
+ Bounds check elimination now uses information from slice creation and can
+ eliminate checks for indexes with types smaller than int
.
+
+ Go 1.14 includes a new package,
+ hash/maphash
,
+ which provides hash functions on byte sequences.
+ These hash functions are intended to be used to implement hash tables or
+ other data structures that need to map arbitrary strings or byte
+ sequences to a uniform distribution on unsigned 64-bit integers.
+
+ The hash functions are collision-resistant but not cryptographically secure. +
++ The hash value of a given byte sequence is consistent within a + single process, but will be different in different processes. +
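+ A minimal sketch of hashing a string:
+<pre>
+package main
+
+import (
+	"fmt"
+	"hash/maphash"
+)
+
+func main() {
+	var h maphash.Hash
+	h.SetSeed(maphash.MakeSeed()) // the seed varies from process to process
+
+	h.WriteString("hello, world")
+	fmt.Println(h.Sum64())
+
+	// The same seed and the same bytes produce the same value
+	// within this process.
+	h.Reset()
+	h.WriteString("hello, world")
+	fmt.Println(h.Sum64())
+}
+</pre>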
+ ++ As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +
+ ++ Support for SSL version 3.0 (SSLv3) has been removed. Note that SSLv3 is the + cryptographically broken + protocol predating TLS. +
+ +
+ TLS 1.3 can't be disabled via the GODEBUG
environment
+ variable anymore. Use the
+ Config.MaxVersion
+ field to configure TLS versions.
+
+ When multiple certificate chains are provided through the
+ Config.Certificates
+ field, the first one compatible with the peer is now automatically
+ selected. This allows for example providing an ECDSA and an RSA
+ certificate, and letting the package automatically select the best one.
+ Note that the performance of this selection is going to be poor unless the
+ Certificate.Leaf
+ field is set. The
+ Config.NameToCertificate
+ field, which only supports associating a single certificate with
+ a give name, is now deprecated and should be left as nil
.
+ Similarly the
+ Config.BuildNameToCertificate
+ method, which builds the NameToCertificate
field
+ from the leaf certificates, is now deprecated and should not be
+ called.
+
+ The new CipherSuites
+ and InsecureCipherSuites
+ functions return a list of currently implemented cipher suites.
+ The new CipherSuiteName
+ function returns a name for a cipher suite ID.
+
+ The new
+ (*ClientHelloInfo).SupportsCertificate
and
+
+ (*CertificateRequestInfo).SupportsCertificate
+ methods expose whether a peer supports a certain certificate.
+
+ The tls
package no longer supports the legacy Next Protocol
+ Negotiation (NPN) extension and now only supports ALPN. In previous
+ releases it supported both. There are no API changes and applications
+ should function identically as before. Most other clients and servers have
+ already removed NPN support in favor of the standardized ALPN.
+
+ RSA-PSS signatures are now used when supported in TLS 1.2 handshakes. This
+ won't affect most applications, but custom
+ Certificate.PrivateKey
+ implementations that don't support RSA-PSS signatures will need to use the new
+
+ Certificate.SupportedSignatureAlgorithms
+ field to disable them.
+
+ Config.Certificates
and
+ Config.GetCertificate
+ can now both be nil if
+ Config.GetConfigForClient
+ is set. If the callbacks return neither certificates nor an error, the
+ unrecognized_name
is now sent.
+
+ The new CertificateRequestInfo.Version
+ field provides the TLS version to client certificates callbacks.
+
+ The new TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
and
+ TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
constants use
+ the final names for the cipher suites previously referred to as
+ TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
and
+ TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
.
+
+ Certificate.CreateCRL
+ now supports Ed25519 issuers.
+
+ The debug/dwarf
package now supports reading DWARF
+ version 5.
+
+ The new
+ method (*Data).AddSection
+ supports adding arbitrary new DWARF sections from the input file
+ to the DWARF Data
.
+
+ The new
+ method (*Reader).ByteOrder
+ returns the byte order of the current compilation unit.
+ This may be used to interpret attributes that are encoded in the
+ native ordering, such as location descriptions.
+
+ The new
+ method (*LineReader).Files
+ returns the file name table from a line reader.
+ This may be used to interpret the value of DWARF attributes such
+ as AttrDeclFile
.
+
+ Unmarshal
+ now supports ASN.1 string type BMPString, represented by the new
+ TagBMPString
+ constant.
+
+ The Decoder
+ type supports a new
+ method InputOffset
+ that returns the input stream byte offset of the current
+ decoder position.
+
+ Compact
no longer
+ escapes the U+2028
and U+2029
characters, which
+ was never a documented feature. For proper escaping, see HTMLEscape
.
+
+ Number
no longer
+ accepts invalid numbers, to follow the documented behavior more closely.
+ If a program needs to accept invalid numbers like the empty string,
+ consider wrapping the type with Unmarshaler
.
+
+ Unmarshal
+ now supports map keys with string underlying type which implement
+ encoding.TextUnmarshaler
.
+
+ The Context
+ type has a new field Dir
which may be used to set
+ the working directory for the build.
+ The default is the current directory of the running process.
+ In module mode, this is used to locate the main module.
+
+ The new
+ function NewFromFiles
+ computes package documentation from a list
+ of *ast.File
's and associates examples with the
+ appropriate package elements.
+ The new information is available in a new Examples
+ field
+ in the Package
, Type
,
+ and Func
types, and a
+ new Suffix
+ field in
+ the Example
+ type.
+
+ TempDir
can now create directories
+ whose names have predictable prefixes and suffixes.
+ As with TempFile
, if the pattern
+ contains a '*', the random string replaces the last '*'.
+
+ The
+ new Lmsgprefix
+ flag may be used to tell the logging functions to emit the
+ optional output prefix immediately before the log message rather
+ than at the start of the line.
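+ For example:
+<pre>
+package main
+
+import (
+	"log"
+	"os"
+)
+
+func main() {
+	l := log.New(os.Stdout, "worker: ", log.LstdFlags|log.Lmsgprefix)
+	l.Print("started")
+	// Prints, e.g.: 2009/11/10 23:00:00 worker: started
+	// (without Lmsgprefix: worker: 2009/11/10 23:00:00 started)
+}
+</pre>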
+
+ The new FMA
function
+ computes x*y+z
in floating point with no
+ intermediate rounding of the x*y
+ computation. Several architectures implement this computation
+ using dedicated hardware instructions for additional performance.
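+ For example, FMA can expose a rounding error that the separately rounded expression hides:
+<pre>
+package main
+
+import (
+	"fmt"
+	"math"
+)
+
+func main() {
+	x, y, z := 0.1, 10.0, -1.0
+	fmt.Println(x*y + z)           // 0: x*y is rounded to exactly 1 first
+	fmt.Println(math.FMA(x, y, z)) // about 5.55e-17: the true error of 0.1*10, rounded once
+}
+</pre>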
+
+ The GCD
method
+ now allows the inputs a
and b
to be
+ zero or negative.
+
+ The new functions
+ Rem
,
+ Rem32
, and
+ Rem64
+ support computing a remainder even when the quotient overflows.
+
+ The default type of .js
and .mjs
files
+ is now text/javascript
rather
+ than application/javascript
.
+ This is in accordance
+ with an
+ IETF draft that treats application/javascript
as obsolete.
+
+ The
+ new Reader
+ method NextRawPart
+ supports fetching the next MIME part without transparently
+ decoding quoted-printable
data.
+
+ The new Header
+ method Values
+ can be used to fetch all values associated with a
+ canonicalized key.
+
+ The
+ new Transport
+ field DialTLSContext
+ can be used to specify an optional dial function for creating
+ TLS connections for non-proxied HTTPS requests.
+ This new field can be used instead
+ of DialTLS
,
+ which is now considered deprecated; DialTLS
will
+ continue to work, but new code should
+ use DialTLSContext
, which allows the transport to
+ cancel dials as soon as they are no longer needed.
+
+ On Windows, ServeFile
now correctly
+ serves files larger than 2GB.
+
+ The
+ new Server
+ field EnableHTTP2
+ supports enabling HTTP/2 on the test server.
+
+ The
+ new MIMEHeader
+ method Values
+ can be used to fetch all values associated with a canonicalized
+ key.
+
+ When parsing of a URL fails
+ (for example by Parse
+ or ParseRequestURI
),
+ the resulting Error
message
+ will now quote the unparsable URL.
+ This provides clearer structure and consistency with other parsing errors.
+
+ On Windows,
+ the CTRL_CLOSE_EVENT
, CTRL_LOGOFF_EVENT
,
+ and CTRL_SHUTDOWN_EVENT
events now generate
+ a syscall.SIGTERM
signal, similar to how Control-C
+ and Control-Break generate a syscall.SIGINT
signal.
+
+ The plugin
package now supports freebsd/amd64
.
+
+ StructOf
now
+ supports creating struct types with unexported fields, by
+ setting the PkgPath
field in
+ a StructField
element.
+
+ runtime.Goexit
can no longer be aborted by a
+ recursive panic
/recover
.
+
+ On macOS, SIGPIPE
is no longer forwarded to signal
+ handlers installed before the Go runtime is initialized.
+ This is necessary because macOS delivers SIGPIPE
+ to the main thread
+ rather than the thread writing to the closed pipe.
+
+ The generated profile no longer includes the pseudo-PCs used for inline + marks. Symbol information of inlined functions is encoded in + the format + the pprof tool expects. This is a fix for the regression introduced + during recent releases. +
+
+ The NumError
+ type now has
+ an Unwrap
+ method that may be used to retrieve the reason that a conversion
+ failed.
+ This supports using NumError
values
+ with errors.Is
to see
+ if the underlying error
+ is strconv.ErrRange
+ or strconv.ErrSyntax
.
+
+ Unlocking a highly contended Mutex
now directly
+ yields the CPU to the next goroutine waiting for
+ that Mutex
. This significantly improves the
+ performance of highly contended mutexes on high CPU count
+ machines.
+
+ The testing package now supports cleanup functions, called after
+ a test or benchmark has finished, by calling
+ T.Cleanup
or
+ B.Cleanup
respectively.
+
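+ A minimal sketch (the temporary directory is illustrative only):
+<pre>
+package example_test
+
+import (
+	"io/ioutil"
+	"os"
+	"testing"
+)
+
+func TestWithTempDir(t *testing.T) {
+	dir, err := ioutil.TempDir("", "example-*")
+	if err != nil {
+		t.Fatal(err)
+	}
+	// The cleanup function runs after the test and its subtests finish.
+	t.Cleanup(func() { os.RemoveAll(dir) })
+
+	// ... use dir ...
+}
+</pre>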
+ The text/template package now correctly reports errors when a
+ parenthesized argument is used as a function.
+ This most commonly shows up in erroneous cases like
+ {{if (eq .F "a") or (eq .F "b")}}
.
+ This should be written as {{if or (eq .F "a") (eq .F "b")}}
.
+ The erroneous case never worked as expected, and will now be
+ reported with an error can't give argument to non-function
.
+
+ The unicode
package and associated
+ support throughout the system has been upgraded from Unicode 11.0 to
+ Unicode 12.0,
+ which adds 554 new characters, including four new scripts, and 61 new emoji.
+
+ The latest Go release, version 1.15, arrives six months after Go 1.14. + Most of its changes are in the implementation of the toolchain, runtime, and libraries. + As always, the release maintains the Go 1 promise of compatibility. + We expect almost all Go programs to continue to compile and run as before. +
+ +
+ Go 1.15 includes substantial improvements to the linker,
+ improves allocation for small objects at high core counts, and
+ deprecates X.509 CommonName.
+ GOPROXY
now supports skipping proxies that return errors and
+ a new embedded tzdata package has been added.
+
+ There are no changes to the language. +
+ ++ As announced in the Go 1.14 release + notes, Go 1.15 requires macOS 10.12 Sierra or later; support for + previous versions has been discontinued. +
+ +
+ As announced in the Go 1.14 release
+ notes, Go 1.15 drops support for 32-bit binaries on macOS, iOS,
+ iPadOS, watchOS, and tvOS (the darwin/386
+ and darwin/arm
ports). Go continues to support the
+ 64-bit darwin/amd64
and darwin/arm64
ports.
+
+ Go now generates Windows ASLR executables when the -buildmode=pie
+ cmd/link flag is provided. The go command uses -buildmode=pie
+ by default on Windows.
+
+ The -race
and -msan
flags now always
+ enable -d=checkptr
, which checks uses
+ of unsafe.Pointer
. This was previously the case on all
+ OSes except Windows.
+
+ Go-built DLLs no longer cause the process to exit when it receives a + signal (such as Ctrl-C at a terminal). +
+ +
+ When linking binaries for Android, Go 1.15 explicitly selects
+ the lld
linker available in recent versions of the NDK.
+ The lld
linker avoids crashes on some devices, and is
+ planned to become the default NDK linker in a future NDK version.
+
+ Go 1.15 adds support for OpenBSD 6.7 on GOARCH=arm
+ and GOARCH=arm64
. Previous versions of Go already
+ supported OpenBSD 6.7 on GOARCH=386
+ and GOARCH=amd64
.
+
+ There has been progress in improving the stability and performance
+ of the 64-bit RISC-V port on Linux (GOOS=linux
,
+ GOARCH=riscv64
). It also now supports asynchronous
+ preemption.
+
+ Go 1.15 is the last release to support x87-only floating-point
+ hardware (GO386=387
). Future releases will require at
+ least SSE2 support on 386, raising Go's
+ minimum GOARCH=386
requirement to the Intel Pentium 4
+ (released in 2000) or AMD Opteron/Athlon 64 (released in 2003).
+
+ The GOPROXY
environment variable now supports skipping proxies
+ that return errors. Proxy URLs may now be separated with either commas
+ (,
) or pipe characters (|
). If a proxy URL is
+ followed by a comma, the go
command will only try the next proxy
+ in the list after a 404 or 410 HTTP response. If a proxy URL is followed by a
+ pipe character, the go
command will try the next proxy in the
+ list after any error. Note that the default value of GOPROXY
+ remains https://proxy.golang.org,direct
, which does not fall
+ back to direct
in case of errors.
+
go
test
+ Changing the -timeout
flag now invalidates cached test results. A
+ cached result for a test run with a long timeout will no longer count as
+ passing when go
test
is re-invoked with a short one.
+
+ Various flag parsing issues in go
test
and
+ go
vet
have been fixed. Notably, flags specified
+ in GOFLAGS
are handled more consistently, and
+ the -outputdir
flag now interprets relative paths relative to the
+ working directory of the go
command (rather than the working
+ directory of each individual test).
+
+ The location of the module cache may now be set with
+ the GOMODCACHE
environment variable. The default value of
+ GOMODCACHE
is GOPATH[0]/pkg/mod
, the location of the
+ module cache before this change.
+
+ A workaround is now available for Windows "Access is denied" errors in
+ go
commands that access the module cache, caused by external
+ programs concurrently scanning the file system (see
+ issue #36568). The workaround is
+ not enabled by default because it is not safe to use when Go versions lower
+ than 1.14.2 and 1.13.10 are running concurrently with the same module cache.
+ It can be enabled by explicitly setting the environment variable
+ GODEBUG=modcacheunzipinplace=1
.
+
+ The vet tool now warns about conversions of the
+ form string(x)
where x
has an integer type
+ other than rune
or byte
.
+ Experience with Go has shown that many conversions of this form
+ erroneously assume that string(x)
evaluates to the
+ string representation of the integer x
.
+ It actually evaluates to a string containing the UTF-8 encoding of
+ the value of x
.
+ For example, string(9786)
does not evaluate to the
+ string "9786"
; it evaluates to the
+ string "\xe2\x98\xba"
, or "☺"
.
+
+ Code that is using string(x)
correctly can be rewritten
+ to string(rune(x))
.
+ Or, in some cases, calling utf8.EncodeRune(buf, x)
with
+ a suitable byte slice buf
may be the right solution.
+ Other code should most likely use strconv.Itoa
+ or fmt.Sprint
.
+
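+ For example, the two likely intended meanings can be written as:
+<pre>
+package main
+
+import (
+	"fmt"
+	"strconv"
+)
+
+func main() {
+	x := 9786
+	fmt.Println(string(rune(x))) // the code point; this is what string(x) actually meant
+	fmt.Println(strconv.Itoa(x)) // "9786", the decimal representation
+}
+</pre>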
+ This new vet check is enabled by default when
+ using go
test
.
+
+ We are considering prohibiting the conversion in a future release of Go.
+ That is, the language would change to only
+ permit string(x)
for integer x
when the
+ type of x
is rune
or byte
.
+ Such a language change would not be backward compatible.
+ We are using this vet check as a first trial step toward changing
+ the language.
+
+ The vet tool now warns about type assertions from one interface type + to another interface type when the type assertion will always fail. + This will happen if both interface types implement a method with the + same name but with a different type signature. +
+ ++ There is no reason to write a type assertion that always fails, so + any code that triggers this vet check should be rewritten. +
+ +
+ This new vet check is enabled by default when
+ using go
test
.
+
+ We are considering prohibiting impossible interface type assertions + in a future release of Go. + Such a language change would not be backward compatible. + We are using this vet check as a first trial step toward changing + the language. +
+ +
+ If panic
is invoked with a value whose type is derived from any
+ of: bool
, complex64
, complex128
, float32
, float64
,
+ int
, int8
, int16
, int32
, int64
, string
,
+ uint
, uint8
, uint16
, uint32
, uint64
, uintptr
,
+ then the value will be printed, instead of just its address.
+ Previously, this was only true for values of exactly these types.
+
+ On a Unix system, if the kill
command
+ or kill
system call is used to send
+ a SIGSEGV
, SIGBUS
,
+ or SIGFPE
signal to a Go program, and if the signal
+ is not being handled via
+ os/signal.Notify
,
+ the Go program will now reliably crash with a stack trace.
+ In earlier releases the behavior was unpredictable.
+
+ Allocation of small objects now performs much better at high core + counts, and has lower worst-case latency. +
+ ++ Converting a small integer value into an interface value no longer + causes allocation. +
+ ++ Non-blocking receives on closed channels now perform as well as + non-blocking receives on open channels. +
+ +
+ Package unsafe
's safety
+ rules allow converting an unsafe.Pointer
+ into uintptr
when calling certain
+ functions. Previously, in some cases, the compiler allowed multiple
+ chained conversions (for example, syscall.Syscall(…,
+ uintptr(uintptr(ptr)),
…)
). The compiler
+ now requires exactly one conversion. Code that used multiple
+ conversions should be updated to satisfy the safety rules.
+
+ Go 1.15 reduces typical binary sizes by around 5% compared to Go + 1.14 by eliminating certain types of GC metadata and more + aggressively eliminating unused type metadata. +
+ +
+ The toolchain now mitigates
+ Intel
+ CPU erratum SKX102 on GOARCH=amd64
by aligning
+ functions to 32 byte boundaries and padding jump instructions. While
+ this padding increases binary sizes, this is more than made up for
+ by the binary size improvements mentioned above.
+
+ Go 1.15 adds a -spectre
flag to both the
+ compiler and the assembler, to allow enabling Spectre mitigations.
+ These should almost never be needed and are provided mainly as a
+ “defense in depth” mechanism.
+ See the Spectre wiki page for details.
+
+ The compiler now rejects //go:
compiler directives that
+ have no meaning for the declaration they are applied to with a
+ "misplaced compiler directive" error. Such misapplied directives
+ were broken before, but were silently ignored by the compiler.
+
+ The compiler's -json
optimization logging now reports
+ large (>= 128 byte) copies and includes explanations of escape
+ analysis decisions.
+
+ This release includes substantial improvements to the Go linker, + which reduce linker resource usage (both time and memory) and + improve code robustness/maintainability. +
+ +
+ For a representative set of large Go programs, linking is 20% faster
+ and requires 30% less memory on average, for ELF
-based
+ OSes (Linux, FreeBSD, NetBSD, OpenBSD, Dragonfly, and Solaris)
+ running on amd64
architectures, with more modest
+ improvements for other architecture/OS combinations.
+
+ The key contributors to better linker performance are a newly + redesigned object file format, and a revamping of internal + phases to increase concurrency (for example, applying relocations to + symbols in parallel). Object files in Go 1.15 are slightly larger + than their 1.14 equivalents. +
+ ++ These changes are part of a multi-release project + to modernize the Go + linker, meaning that there will be additional linker + improvements expected in future releases. +
+ +
+ The linker now defaults to internal linking mode
+ for -buildmode=pie
on
+ linux/amd64
and linux/arm64
, so these
+ configurations no longer require a C linker. External linking
+ mode (which was the default in Go 1.14 for
+ -buildmode=pie
) can still be requested with
+ -ldflags=-linkmode=external
flag.
+
+ The objdump tool now supports
+ disassembling in GNU assembler syntax with the -gnu
+ flag.
+
+ Go 1.15 includes a new package,
+ time/tzdata
,
+ that permits embedding the timezone database into a program.
+ Importing this package (as import _ "time/tzdata"
)
+ permits the program to find timezone information even if the
+ timezone database is not available on the local system.
+ You can also embed the timezone database by building
+ with -tags timetzdata
.
+ Either approach increases the size of the program by about 800 KB.
+
+ Go 1.15 will translate the C type EGLConfig
to the
+ Go type uintptr
. This change is similar to how Go
+ 1.12 and newer treats EGLDisplay
, Darwin's CoreFoundation and
+ Java's JNI types. See the cgo
+ documentation for more information.
+
+ In Go 1.15.3 and later, cgo will not permit Go code to allocate an
+ undefined struct type (a C struct defined as just struct
+ S;
or similar) on the stack or heap.
+ Go code will only be permitted to use pointers to those types.
+ Allocating an instance of such a struct and passing a pointer, or a
+ full struct value, to C code was always unsafe and unlikely to work
+ correctly; it is now forbidden.
+ The fix is to either rewrite the Go code to use only pointers, or to
+ ensure that the Go code sees the full definition of the struct by
+ including the appropriate C header file.
+
+ The deprecated, legacy behavior of treating the CommonName
+ field on X.509 certificates as a host name when no Subject Alternative Names
+ are present is now disabled by default. It can be temporarily re-enabled by
+ adding the value x509ignoreCN=0
to the GODEBUG
+ environment variable.
+
+ Note that if the CommonName
is an invalid host name, it's always
+ ignored, regardless of GODEBUG
settings. Invalid names include
+ those with any characters other than letters, digits, hyphens and underscores,
+ and those with empty labels or trailing dots.
+
+ As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +
+ +
+ When a Scanner
is
+ used with an invalid
+ io.Reader
that
+ incorrectly returns a negative number from Read
,
+ the Scanner
will no longer panic, but will instead
+ return the new error
+ ErrBadReadCount
.
+
+ Creating a derived Context
using a nil parent is now explicitly
+ disallowed. Any attempt to do so with the
+ WithValue
,
+ WithDeadline
, or
+ WithCancel
functions
+ will cause a panic.
+
+ The PrivateKey
and PublicKey
types in the
+ crypto/rsa
,
+ crypto/ecdsa
, and
+ crypto/ed25519
packages
+ now have an Equal
method to compare keys for equivalence
+ or to make type-safe interfaces for public keys. The method signature
+ is compatible with
+ go-cmp
's
+ definition of equality.
+
+ Hash
now implements
+ fmt.Stringer
.
+
+ The new SignASN1
+ and VerifyASN1
+ functions allow generating and verifying ECDSA signatures in the standard
+ ASN.1 DER encoding.
+
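+  A short sketch of signing and verifying in the DER-encoded form; the
+  curve and message are arbitrary choices:
+
+package main
+
+import (
+	"crypto/ecdsa"
+	"crypto/elliptic"
+	"crypto/rand"
+	"crypto/sha256"
+	"fmt"
+)
+
+func main() {
+	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
+	if err != nil {
+		panic(err)
+	}
+	digest := sha256.Sum256([]byte("hello"))
+
+	// SignASN1 returns a single ASN.1 DER blob instead of the separate
+	// r and s values produced by Sign.
+	sig, err := ecdsa.SignASN1(rand.Reader, key, digest[:])
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println("valid:", ecdsa.VerifyASN1(&key.PublicKey, digest[:], sig))
+}
+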
+ The new MarshalCompressed
+ and UnmarshalCompressed
+ functions allow encoding and decoding NIST elliptic curve points in compressed format.
+
+ VerifyPKCS1v15
+ now rejects invalid short signatures with missing leading zeroes, according to RFC 8017.
+
+ The new
+ Dialer
+ type and its
+ DialContext
+ method permit using a context to both connect and handshake with a TLS server.
+
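+  A minimal sketch of a context-bounded connect-plus-handshake; the
+  host, port, and five-second budget are placeholders:
+
+package main
+
+import (
+	"context"
+	"crypto/tls"
+	"log"
+	"time"
+)
+
+func main() {
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	d := &tls.Dialer{Config: &tls.Config{MinVersion: tls.VersionTLS12}}
+	// The context covers both the TCP connection and the TLS handshake.
+	conn, err := d.DialContext(ctx, "tcp", "example.com:443")
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer conn.Close()
+	log.Printf("negotiated TLS version %#x", conn.(*tls.Conn).ConnectionState().Version)
+}
+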
+ The new
+ VerifyConnection
+ callback on the Config
type
+ allows custom verification logic for every connection. It has access to the
+ ConnectionState
+ which includes peer certificates, SCTs, and stapled OCSP responses.
+
+ Auto-generated session ticket keys are now automatically rotated every 24 hours, + with a lifetime of 7 days, to limit their impact on forward secrecy. +
+ ++ Session ticket lifetimes in TLS 1.2 and earlier, where the session keys + are reused for resumed connections, are now limited to 7 days, also to + limit their impact on forward secrecy. +
+ ++ The client-side downgrade protection checks specified in RFC 8446 are now + enforced. This has the potential to cause connection errors for clients + encountering middleboxes that behave like unauthorized downgrade attacks. +
+ +
+ SignatureScheme
,
+ CurveID
, and
+ ClientAuthType
+ now implement fmt.Stringer
.
+
+ The ConnectionState
+ fields OCSPResponse
and SignedCertificateTimestamps
+ are now repopulated on client-side resumed connections.
+
+ tls.Conn
+ now returns an opaque error on permanently broken connections, wrapping
+ the temporary
+ net.Error
. To access the
+ original net.Error
, use
+ errors.As
(or
+ errors.Unwrap
) instead of a
+ type assertion.
+
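+  A rough sketch of the errors.As pattern this implies; the helper name
+  is made up, and a DNS failure stands in for a broken TLS connection so
+  the example is self-contained:
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"net"
+)
+
+// isTemporary unwraps err with errors.As instead of a type assertion,
+// which keeps working now that the net.Error may be wrapped.
+func isTemporary(err error) bool {
+	var netErr net.Error
+	return errors.As(err, &netErr) && netErr.Temporary()
+}
+
+func main() {
+	_, err := net.Dial("tcp", "no-such-host.invalid:80")
+	fmt.Println(isTemporary(err))
+}
+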
+ If either the name on the certificate or the name being verified (with
+ VerifyOptions.DNSName
+ or VerifyHostname
)
+ are invalid, they will now be compared case-insensitively without further
+ processing (without honoring wildcards or stripping trailing dots).
+ Invalid names include those with any characters other than letters,
+ digits, hyphens and underscores, those with empty labels, and names on
+ certificates with trailing dots.
+
+ The new CreateRevocationList
+ function and RevocationList
type
+ allow creating RFC 5280-compliant X.509 v2 Certificate Revocation Lists.
+
+ CreateCertificate
+ now automatically generates the SubjectKeyId
if the template
+ is a CA and doesn't explicitly specify one.
+
+ CreateCertificate
+ now returns an error if the template specifies MaxPathLen
but is not a CA.
+
+ On Unix systems other than macOS, the SSL_CERT_DIR
+ environment variable can now be a colon-separated list.
+
+ On macOS, binaries are now always linked against
+ Security.framework
to extract the system trust roots,
+ regardless of whether cgo is available. The resulting behavior should be
+ more consistent with the OS verifier.
+
+ Name.String
+ now prints non-standard attributes from
+ Names
if
+ ExtraNames
is nil.
+
+ The new DB.SetConnMaxIdleTime
+ method allows removing a connection from the connection pool after
+ it has been idle for a period of time, without regard to the total
+ lifespan of the connection. The DBStats.MaxIdleTimeClosed
+ field shows the total number of connections closed due to
+ DB.SetConnMaxIdleTime
.
+
+ The new Row.Err
getter
+ allows checking for query errors without calling
+ Row.Scan
.
+
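+  A combined sketch of both additions; the driver name, DSN, and users
+  table are placeholders, and a real program must import a driver
+  package for sql.Open to succeed:
+
+package main
+
+import (
+	"database/sql"
+	"log"
+	"time"
+)
+
+// lookupName uses the new Row.Err getter to detect a failed query
+// before calling Scan.
+func lookupName(db *sql.DB, id int) (string, error) {
+	row := db.QueryRow("SELECT name FROM users WHERE id = ?", id)
+	if err := row.Err(); err != nil {
+		return "", err
+	}
+	var name string
+	err := row.Scan(&name)
+	return name, err
+}
+
+func main() {
+	db, err := sql.Open("mysql", "user:password@/testdb") // placeholder driver and DSN
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer db.Close()
+
+	// Close connections that have been idle for five minutes, regardless
+	// of the total lifetime set by SetConnMaxLifetime.
+	db.SetConnMaxIdleTime(5 * time.Minute)
+
+	name, err := lookupName(db, 1)
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Println("name:", name)
+}
+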
+ The new Validator
+ interface may be implemented by Conn
to allow drivers
+ to signal if a connection is valid or if it should be discarded.
+
+ The package now defines the
+ IMAGE_FILE
, IMAGE_SUBSYSTEM
,
+ and IMAGE_DLLCHARACTERISTICS
constants used by the
+ PE file format.
+
+ Marshal
now sorts the components
+ of SET OF according to X.690 DER.
+
+ Unmarshal
now rejects tags and
+ Object Identifiers which are not minimally encoded according to X.690 DER.
+
+ The package now has an internal limit to the maximum depth of + nesting when decoding. This reduces the possibility that a + deeply nested input could use large quantities of stack memory, + or even cause a "goroutine stack exceeds limit" panic. +
+
+ When the flag
package sees -h
or -help
,
+ and those flags are not defined, it now prints a usage message.
+ If the FlagSet
was created with
+ ExitOnError
,
+ FlagSet.Parse
would then
+ exit with a status of 2. In this release, the exit status for -h
+ or -help
has been changed to 0. In particular, this applies to
+ the default handling of command line flags.
+
+ The printing verbs %#g
and %#G
now preserve
+ trailing zeros for floating-point values.
+
+ The Source
and
+ Node
functions
+ now canonicalize number literal prefixes and exponents as part
+ of formatting Go source code. This matches the behavior of the
+ gofmt
command as it
+ was implemented since Go 1.13.
+
+ The package now uses Unicode escapes (\uNNNN
) in all
+ JavaScript and JSON contexts. This fixes escaping errors in
+ application/ld+json
and application/json
+ contexts.
+
+ TempDir
and
+ TempFile
+ now reject patterns that contain path separators.
+ That is, calls such as ioutil.TempFile("/tmp",
"../base*")
will no longer succeed.
+ This prevents unintended directory traversal.
+
+ The new Int.FillBytes
+ method allows serializing to fixed-size pre-allocated byte slices.
+
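+  A short sketch; the value and the 32-byte buffer size are arbitrary:
+
+package main
+
+import (
+	"fmt"
+	"math/big"
+)
+
+func main() {
+	n := new(big.Int).SetInt64(1 << 40)
+
+	// FillBytes writes the absolute value, big-endian and zero-padded,
+	// into a caller-provided slice, which suits fixed-width encodings.
+	buf := make([]byte, 32)
+	n.FillBytes(buf)
+	fmt.Printf("%x\n", buf)
+}
+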
+ The functions in this package were updated to conform to the C99 standard + (Annex G IEC 60559-compatible complex arithmetic) with respect to handling + of special arguments such as infinity, NaN and signed zero. +
+
+ If an I/O operation exceeds a deadline set by
+ the Conn.SetDeadline
,
+ Conn.SetReadDeadline
,
+ or Conn.SetWriteDeadline
methods, it will now
+ return an error that is or wraps
+ os.ErrDeadlineExceeded
.
+ This may be used to reliably detect whether an error is due to
+ an exceeded deadline.
+ Earlier releases recommended calling the Timeout
+ method on the error, but I/O operations can return errors for
+ which Timeout
returns true
although a
+ deadline has not been exceeded.
+
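+  A minimal sketch of detecting an exceeded deadline with errors.Is;
+  the host and the one-millisecond deadline are placeholders:
+
+package main
+
+import (
+	"errors"
+	"fmt"
+	"net"
+	"os"
+	"time"
+)
+
+func main() {
+	conn, err := net.Dial("tcp", "example.com:80")
+	if err != nil {
+		fmt.Println("dial:", err)
+		return
+	}
+	defer conn.Close()
+
+	// Force the next Read to time out almost immediately.
+	conn.SetReadDeadline(time.Now().Add(time.Millisecond))
+
+	buf := make([]byte, 1)
+	if _, err := conn.Read(buf); errors.Is(err, os.ErrDeadlineExceeded) {
+		fmt.Println("read timed out:", err)
+	}
+}
+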
+ The new Resolver.LookupIP
+ method supports IP lookups that are both network-specific and accept a context.
+
+ Parsing is now stricter as a hardening measure against request smuggling attacks:
+ non-ASCII white space is no longer trimmed like SP and HTAB, and support for the
+ "identity
" Transfer-Encoding
was dropped.
+
+ ReverseProxy
+ now supports not modifying the X-Forwarded-For
+ header when the incoming Request.Header
map entry
+ for that field is nil
.
+
+ When a Switching Protocol (like WebSocket) request handled by
+ ReverseProxy
+ is canceled, the backend connection is now correctly closed.
+
+ All profile endpoints now support a "seconds
" parameter. When present,
+ the endpoint profiles for the specified number of seconds and reports the difference.
+ The meaning of the "seconds
" parameter in the cpu
profile and
+ the trace endpoints is unchanged.
+
+ The new URL
field
+ RawFragment
and method EscapedFragment
+ provide detail about and control over the exact encoding of a particular fragment.
+ These are analogous to
+ RawPath
and EscapedPath
.
+
+ The new URL
+ method Redacted
+ returns the URL in string form with any password replaced with xxxxx
.
+
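+  A small illustration; the URL is made up:
+
+package main
+
+import (
+	"fmt"
+	"net/url"
+)
+
+func main() {
+	u, err := url.Parse("https://user:secret@example.com/path")
+	if err != nil {
+		panic(err)
+	}
+	// Redacted is safer than String for logging.
+	fmt.Println(u.Redacted()) // https://user:xxxxx@example.com/path
+}
+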
+ If an I/O operation exceeds a deadline set by
+ the File.SetDeadline
,
+ File.SetReadDeadline
,
+ or File.SetWriteDeadline
+ methods, it will now return an error that is or wraps
+ os.ErrDeadlineExceeded
.
+ This may be used to reliably detect whether an error is due to
+ an exceeded deadline.
+ Earlier releases recommended calling the Timeout
+ method on the error, but I/O operations can return errors for
+ which Timeout
returns true
although a
+ deadline has not been exceeded.
+
+ Packages os
and net
now automatically
+ retry system calls that fail with EINTR
. Previously
+ this led to spurious failures, which became more common in Go
+ 1.14 with the addition of asynchronous preemption. Now this is
+ handled transparently.
+
+ The os.File
type now
+ supports a ReadFrom
+ method. This permits the use of the copy_file_range
+ system call on some systems when using
+ io.Copy
to copy data
+ from one os.File
to another. A consequence is that
+ io.CopyBuffer
+ will not always use the provided buffer when copying to a
+ os.File
. If a program wants to force the use of
+ the provided buffer, it can be done by writing
+ io.CopyBuffer(struct{ io.Writer }{dst}, src, buf)
.
+
+ DWARF generation is now supported (and enabled by default) for -buildmode=plugin
on macOS.
+
+ Building with -buildmode=plugin
is now supported on freebsd/amd64
.
+
+ Package reflect
now disallows accessing methods of all
+ non-exported fields, whereas previously it allowed accessing
+ those of non-exported, embedded fields. Code that relies on the
+ previous behavior should be updated to instead access the
+ corresponding promoted method of the enclosing variable.
+
+ The new Regexp.SubexpIndex
+ method returns the index of the first subexpression with the given name
+ within the regular expression.
+
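+  A small sketch; the pattern and input are arbitrary:
+
+package main
+
+import (
+	"fmt"
+	"regexp"
+)
+
+func main() {
+	re := regexp.MustCompile(`(?P<year>\d{4})-(?P<month>\d{2})`)
+	if m := re.FindStringSubmatch("released 2020-08"); m != nil {
+		// Look up the capture by name instead of a hard-coded position.
+		fmt.Println(m[re.SubexpIndex("month")]) // 08
+	}
+}
+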
+ Several functions, including
+ ReadMemStats
+ and
+ GoroutineProfile
,
+ no longer block if a garbage collection is in progress.
+
+ The goroutine profile now includes the profile labels associated with each
+ goroutine at the time of profiling. This feature is not yet implemented for
+ the profile reported with debug=2
.
+
+ FormatComplex
and ParseComplex
are added for working with complex numbers.
+
+ FormatComplex
converts a complex number into a string of the form (a+bi), where a and b are the real and imaginary parts.
+
+ ParseComplex
converts a string into a complex number of a specified precision. ParseComplex
accepts complex numbers in the format N+Ni
.
+
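+  A short sketch of both functions; the values and the 128-bit size are
+  arbitrary:
+
+package main
+
+import (
+	"fmt"
+	"strconv"
+)
+
+func main() {
+	s := strconv.FormatComplex(complex(3, -4), 'f', 2, 128)
+	fmt.Println(s) // (3.00-4.00i)
+
+	c, err := strconv.ParseComplex(s, 128)
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(real(c), imag(c)) // 3 -4
+}
+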
+ The new method
+ Map.LoadAndDelete
+ atomically deletes a key and returns the previous value if present.
+
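+  A minimal sketch; the key and value are arbitrary:
+
+package main
+
+import (
+	"fmt"
+	"sync"
+)
+
+func main() {
+	var m sync.Map
+	m.Store("token", 42)
+
+	// Atomically remove the key and report what, if anything, was stored.
+	if v, loaded := m.LoadAndDelete("token"); loaded {
+		fmt.Println("previous value:", v)
+	}
+	if _, loaded := m.LoadAndDelete("token"); !loaded {
+		fmt.Println("already gone")
+	}
+}
+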
+ The method
+ Map.Delete
+ is more efficient.
+
+ On Unix systems, functions that use
+ SysProcAttr
+ will now reject attempts to set both the Setctty
+ and Foreground
fields, as they both use
+ the Ctty
field but do so in incompatible ways.
+ We expect that few existing programs set both fields.
+
+ Setting the Setctty
field now requires that the
+ Ctty
field be set to a file descriptor number in the
+ child process, as determined by the ProcAttr.Files
field.
+ Using a child descriptor always worked, but there were certain
+ cases where using a parent file descriptor also happened to work.
+ Some programs that set Setctty
will need to change
+ the value of Ctty
to use a child descriptor number.
+
+ It is now possible to call
+ system calls that return floating point values
+ on windows/amd64
.
+
+ The testing.T
type now has a
+ Deadline
method
+ that reports the time at which the test binary will have exceeded its
+ timeout.
+
+ A TestMain
function is no longer required to call
+ os.Exit
. If a TestMain
function returns,
+ the test binary will call os.Exit
with the value returned
+ by m.Run
.
+
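+  A minimal sketch of a TestMain that relies on the new behavior; the
+  package name and log messages are placeholders:
+
+package mypkg_test
+
+import (
+	"log"
+	"testing"
+)
+
+// Returning is now enough: the test binary exits with m.Run's status
+// automatically, so the explicit os.Exit(m.Run()) is optional.
+func TestMain(m *testing.M) {
+	log.Println("setup")
+	m.Run()
+	log.Println("teardown")
+}
+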
+ The new methods
+ T.TempDir
and
+ B.TempDir
+ return temporary directories that are automatically cleaned up
+ at the end of the test.
+
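+  A short sketch; the package name, file name, and contents are
+  placeholders:
+
+package mypkg_test
+
+import (
+	"io/ioutil"
+	"path/filepath"
+	"testing"
+)
+
+func TestWriteConfig(t *testing.T) {
+	dir := t.TempDir() // removed automatically when the test finishes
+
+	path := filepath.Join(dir, "config.json")
+	if err := ioutil.WriteFile(path, []byte(`{"debug":true}`), 0o600); err != nil {
+		t.Fatal(err)
+	}
+}
+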
+ go
test
-v
now groups output by
+ test name, rather than printing the test name on each line.
+
+ JSEscape
now
+ consistently uses Unicode escapes (\u00XX
), which are
+ compatible with JSON.
+
+ The new method
+ Ticker.Reset
+ supports changing the duration of a ticker.
+
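+  A small sketch; the two intervals are arbitrary:
+
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+func main() {
+	t := time.NewTicker(50 * time.Millisecond)
+	defer t.Stop()
+
+	<-t.C
+	fmt.Println("first tick")
+
+	// Reset changes the interval in place, with no Stop/NewTicker dance.
+	t.Reset(10 * time.Millisecond)
+	<-t.C
+	fmt.Println("tick at the shorter interval")
+}
+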
+ When returning an error, ParseDuration
now quotes the original value.
+
+Since the release of Go version 1.1 in April, 2013, +the release schedule has been shortened to make the release process more efficient. +This release, Go version 1.2 or Go 1.2 for short, arrives roughly six months after 1.1, +while 1.1 took over a year to appear after 1.0. +Because of the shorter time scale, 1.2 is a smaller delta than the step from 1.0 to 1.1, +but it still has some significant developments, including +a better scheduler and one new language feature. +Of course, Go 1.2 keeps the promise +of compatibility. +The overwhelming majority of programs built with Go 1.1 (or 1.0 for that matter) +will run without any changes whatsoever when moved to 1.2, +although the introduction of one restriction +to a corner of the language may expose already-incorrect code +(see the discussion of the use of nil). +
+ ++In the interest of firming up the specification, one corner case has been clarified, +with consequences for programs. +There is also one new language feature. +
+ ++The language now specifies that, for safety reasons, +certain uses of nil pointers are guaranteed to trigger a run-time panic. +For instance, in Go 1.0, given code like +
+ ++type T struct { + X [1<<24]byte + Field int32 +} + +func main() { + var x *T + ... +} ++ +
+the nil
pointer x
could be used to access memory incorrectly:
+the expression x.Field
could access memory at address 1<<24
.
+To prevent such unsafe behavior, in Go 1.2 the compilers now guarantee that any indirection through
+a nil pointer, such as illustrated here but also in nil pointers to arrays, nil interface values,
+nil slices, and so on, will either panic or return a correct, safe non-nil value.
+In short, any expression that explicitly or implicitly requires evaluation of a nil address is an error.
+The implementation may inject extra tests into the compiled program to enforce this behavior.
+
+Further details are in the +design document. +
+ ++Updating: +Most code that depended on the old behavior is erroneous and will fail when run. +Such programs will need to be updated by hand. +
+ ++Go 1.2 adds the ability to specify the capacity as well as the length when using a slicing operation +on an existing array or slice. +A slicing operation creates a new slice by describing a contiguous section of an already-created array or slice: +
+ ++var array [10]int +slice := array[2:4] ++ +
+The capacity of the slice is the maximum number of elements that the slice may hold, even after reslicing;
+it reflects the size of the underlying array.
+In this example, the capacity of the slice
variable is 8.
+
+Go 1.2 adds new syntax to allow a slicing operation to specify the capacity as well as the length. +A second +colon introduces the capacity value, which must be less than or equal to the capacity of the +source slice or array, adjusted for the origin. For instance, +
+ ++slice = array[2:4:7] ++ +
+sets the slice to have the same length as in the earlier example but its capacity is now only 5 elements (7-2). +It is impossible to use this new slice value to access the last three elements of the original array. +
+ +
+In this three-index notation, a missing first index ([:i:j]
) defaults to zero but the other
+two indices must always be specified explicitly.
+It is possible that future releases of Go may introduce default values for these indices.
+
+Further details are in the +design document. +
+ ++Updating: +This is a backwards-compatible change that affects no existing programs. +
+ ++In prior releases, a goroutine that was looping forever could starve out other +goroutines on the same thread, a serious problem when GOMAXPROCS +provided only one user thread. +In Go 1.2, this is partially addressed: The scheduler is invoked occasionally +upon entry to a function. +This means that any loop that includes a (non-inlined) function call can +be pre-empted, allowing other goroutines to run on the same thread. +
+ ++Go 1.2 introduces a configurable limit (default 10,000) to the total number of threads +a single program may have in its address space, to avoid resource starvation +issues in some environments. +Note that goroutines are multiplexed onto threads so this limit does not directly +limit the number of goroutines, only the number that may be simultaneously blocked +in a system call. +In practice, the limit is hard to reach. +
+ +
+The new SetMaxThreads
function in the
+runtime/debug
package controls the thread count limit.
+
+Updating:
+Few programs will be affected by the limit, but if a program dies because it hits the
+limit, it could be modified to call SetMaxThreads
to set a higher count.
+Even better would be to refactor the program to need fewer threads, reducing consumption
+of kernel resources.
+
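+A minimal sketch of raising the limit; the value 20000 is an arbitrary example:
+
+package main
+
+import (
+	"fmt"
+	"runtime/debug"
+)
+
+func main() {
+	// SetMaxThreads returns the previous setting.
+	prev := debug.SetMaxThreads(20000)
+	fmt.Println("previous thread limit:", prev)
+}
+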
+In Go 1.2, the minimum size of the stack when a goroutine is created has been lifted from 4KB to 8KB. +Many programs were suffering performance problems with the old size, which had a tendency +to introduce expensive stack-segment switching in performance-critical sections. +The new number was determined by empirical testing. +
+ +
+At the other end, the new function SetMaxStack
+in the runtime/debug
package controls
+the maximum size of a single goroutine's stack.
+The default is 1GB on 64-bit systems and 250MB on 32-bit systems.
+Before Go 1.2, it was too easy for a runaway recursion to consume all the memory on a machine.
+
+Updating: +The increased minimum stack size may cause programs with many goroutines to use +more memory. There is no workaround, but plans for future releases +include new stack management technology that should address the problem better. +
+ +
+The cgo
command will now invoke the C++
+compiler to build any pieces of the linked-to library that are written in C++;
+the documentation has more detail.
+
+Both binaries are still included with the distribution, but the source code for the +godoc and vet commands has moved to the +go.tools subrepository. +
+ ++Also, the core of the godoc program has been split into a +library, +while the command itself is in a separate +directory. +The move allows the code to be updated easily and the separation into a library and command +makes it easier to construct custom binaries for local sites and different deployment methods. +
+
+Updating:
+Since godoc and vet are not part of the library,
+no client Go code depends on their source and no updating is required.
+
+ ++The binary distributions available from golang.org +include these binaries, so users of these distributions are unaffected. +
+ +
+When building from source, users must use "go get" to install godoc and vet.
+(The binaries will continue to be installed in their usual locations, not
+$GOPATH/bin
.)
+
+$ go get code.google.com/p/go.tools/cmd/godoc +$ go get code.google.com/p/go.tools/cmd/vet ++ +
+We expect the future GCC 4.9 release to include gccgo with full +support for Go 1.2. +In the current (4.8.2) release of GCC, gccgo implements Go 1.1.2. +
+ ++Go 1.2 has several semantic changes to the workings of the gc compiler suite. +Most users will be unaffected by them. +
+ +
+The cgo
command now
+works when C++ is included in the library being linked against.
+See the cgo
documentation
+for details.
+
+The gc compiler displayed a vestigial detail of its origins when
+a program had no package
clause: it assumed
+the file was in package main
.
+The past has been erased, and a missing package
clause
+is now an error.
+
+On the ARM, the toolchain supports "external linking", which +is a step towards being able to build shared libraries with the gc +toolchain and to provide dynamic linking support for environments +in which that is necessary. +
+ +
+In the runtime for the ARM, with 5a
, it used to be possible to refer
+to the runtime-internal m
(machine) and g
+(goroutine) variables using R9
and R10
directly.
+It is now necessary to refer to them by their proper names.
+
+Also on the ARM, the 5l
linker (sic) now defines the
+MOVBS
and MOVHS
instructions
+as synonyms of MOVB
and MOVH
,
+to make clearer the separation between signed and unsigned
+sub-word moves; the unsigned versions already existed with a
+U
suffix.
+
+One major new feature of go test
is
+that it can now compute and, with help from a new, separately installed
+"go tool cover" program, display test coverage results.
+
+The cover tool is part of the
+go.tools
+subrepository.
+It can be installed by running
+
+$ go get code.google.com/p/go.tools/cmd/cover ++ +
+The cover tool does two things.
+First, when "go test" is given the -cover
flag, it is run automatically
+to rewrite the source for the package and insert instrumentation statements.
+The test is then compiled and run as usual, and basic coverage statistics are reported:
+
+$ go test -cover fmt +ok fmt 0.060s coverage: 91.4% of statements +$ ++ +
+Second, for more detailed reports, different flags to "go test" can create a coverage profile file, +which the cover program, invoked with "go tool cover", can then analyze. +
+ ++Details on how to generate and analyze coverage statistics can be found by running the commands +
+ ++$ go help testflag +$ go tool cover -help ++ +
+The "go doc" command is deleted.
+Note that the godoc
tool itself is not deleted,
+just the wrapping of it by the go
command.
+All it did was show the documentation for a package by package path,
+which godoc itself already does with more flexibility.
+It has therefore been deleted to reduce the number of documentation tools and,
+as part of the restructuring of godoc, encourage better options in future.
+
+Updating: For those who still need the precise functionality of running +
+ ++$ go doc ++ +
+in a directory, the behavior is identical to running +
+ ++$ godoc . ++ +
+The go get
command
+now has a -t
flag that causes it to download the dependencies
+of the tests run by the package, not just those of the package itself.
+By default, as before, dependencies of the tests are not downloaded.
+
+There are a number of significant performance improvements in the standard library; here are a few of them. +
+ +compress/bzip2
+decompresses about 30% faster.
+crypto/des
package
+is about five times faster.
+encoding/json
package
+encodes about 30% faster.
+
+The
+archive/tar
+and
+archive/zip
+packages have had a change to their semantics that may break existing programs.
+The issue is that they both provided an implementation of the
+os.FileInfo
+interface that was not compliant with the specification for that interface.
+In particular, their Name
method returned the full
+path name of the entry, but the interface specification requires that
+the method return only the base name (final path element).
+
+Updating: Since this behavior was newly implemented and +a bit obscure, it is possible that no code depends on the broken behavior. +If there are programs that do depend on it, they will need to be identified +and fixed manually. +
+ +
+There is a new package, encoding
,
+that defines a set of standard encoding interfaces that may be used to
+build custom marshalers and unmarshalers for packages such as
+encoding/xml
,
+encoding/json
,
+and
+encoding/binary
.
+These new interfaces have been used to tidy up some implementations in
+the standard library.
+
+The new interfaces are called
+BinaryMarshaler
,
+BinaryUnmarshaler
,
+TextMarshaler
,
+and
+TextUnmarshaler
.
+Full details are in the documentation for the package
+and a separate design document.
+
+The fmt
package's formatted print
+routines such as Printf
+now allow the data items to be printed to be accessed in arbitrary order
+by using an indexing operation in the formatting specifications.
+Wherever an argument is to be fetched from the argument list for formatting,
+either as the value to be formatted or as a width or specification integer,
+a new optional indexing notation [
n]
+fetches argument n instead.
+The value of n is 1-indexed.
+After such an indexing operating, the next argument to be fetched by normal
+processing will be n+1.
+
+For example, the normal Printf
call
+
+fmt.Sprintf("%c %c %c\n", 'a', 'b', 'c') ++ +
+would create the string "a b c"
, but with indexing operations like this,
+
+fmt.Sprintf("%[3]c %[1]c %c\n", 'a', 'b', 'c') ++ +
+the result is "c a b"
. The [3]
index accesses the third formatting
+argument, which is 'c'
, [1]
accesses the first, 'a'
,
+and then the next fetch accesses the argument following that one, 'b'
.
+
+The motivation for this feature is programmable format statements to access +the arguments in different order for localization, but it has other uses: +
+ ++log.Printf("trace: value %v of type %[1]T\n", expensiveFunction(a.b[c])) ++ +
+Updating: The change to the syntax of format specifications +is strictly backwards compatible, so it affects no working programs. +
+ +
+The
+text/template
package
+has a couple of changes in Go 1.2, both of which are also mirrored in the
+html/template
package.
+
+First, there are new default functions for comparing basic types. +The functions are listed in this table, which shows their names and +the associated familiar comparison operator. +
Name | Operator
---|---
eq | ==
ne | !=
lt | <
le | <=
gt | >
ge | >=
+
+These functions behave slightly differently from the corresponding Go operators.
+First, they operate only on basic types (bool
, int
,
+float64
, string
, etc.).
+(Go allows comparison of arrays and structs as well, under some circumstances.)
+Second, values can be compared as long as they are the same sort of value:
+any signed integer value can be compared to any other signed integer value for example. (Go
+does not permit comparing an int8
and an int16
).
+Finally, the eq
function (only) allows comparison of the first
+argument with one or more following arguments. The template in this example,
+
+{{"{{"}}if eq .A 1 2 3 {{"}}"}} equal {{"{{"}}else{{"}}"}} not equal {{"{{"}}end{{"}}"}} ++ +
+reports "equal" if .A
is equal to any of 1, 2, or 3.
+
+The second change is that a small addition to the grammar makes "if else if" chains easier to write. +Instead of writing, +
+ ++{{"{{"}}if eq .A 1{{"}}"}} X {{"{{"}}else{{"}}"}} {{"{{"}}if eq .A 2{{"}}"}} Y {{"{{"}}end{{"}}"}} {{"{{"}}end{{"}}"}} ++ +
+one can fold the second "if" into the "else" and have only one "end", like this: +
+ ++{{"{{"}}if eq .A 1{{"}}"}} X {{"{{"}}else if eq .A 2{{"}}"}} Y {{"{{"}}end{{"}}"}} ++ +
+The two forms are identical in effect; the difference is just in the syntax. +
+ +
+Updating: Neither the "else if" change nor the comparison functions
+affect existing programs. Those that
+already define functions called eq
and so on through a function
+map are unaffected because the associated function map will override the new
+default function definitions.
+
+There are two new packages. +
+ +encoding
package is
+described above.
+image/color/palette
package
+provides standard color palettes.
++The following list summarizes a number of minor changes to the library, mostly additions. +See the relevant package documentation for more information about each change. +
+ +archive/zip
package
+adds the
+DataOffset
accessor
+to return the offset of a file's (possibly compressed) data within the archive.
+bufio
package
+adds Reset
+methods to Reader
and
+Writer
.
+These methods allow the Readers
+and Writers
+to be re-used on new input and output readers and writers, saving
+allocation overhead.
+compress/bzip2
+can now decompress concatenated archives.
+compress/flate
+package adds a Reset
+method on the Writer
,
+to make it possible to reduce allocation when, for instance, constructing an
+archive to hold multiple compressed files.
+compress/gzip
package's
+Writer
type adds a
+Reset
+so it may be reused.
+compress/zlib
package's
+Writer
type adds a
+Reset
+so it may be reused.
+container/heap
package
+adds a Fix
+method to provide a more efficient way to update an item's position in the heap.
+container/list
package
+adds the MoveBefore
+and
+MoveAfter
+methods, which implement the obvious rearrangement.
+crypto/cipher
package
+adds the a new GCM mode (Galois Counter Mode), which is almost always
+used with AES encryption.
+crypto/md5
package
+adds a new Sum
function
+to simplify hashing without sacrificing performance.
+crypto/sha1
package
+adds a new Sum
function.
+crypto/sha256
package
+adds Sum256
+and Sum224
functions.
+crypto/sha512
package
+adds Sum512
and
+Sum384
functions.
+crypto/x509
package
+adds support for reading and writing arbitrary extensions.
+crypto/tls
package adds
+support for TLS 1.1, 1.2 and AES-GCM.
+database/sql
package adds a
+SetMaxOpenConns
+method on DB
to limit the
+number of open connections to the database.
+encoding/csv
package
+now always allows trailing commas on fields.
+encoding/gob
package
+now treats channel and function fields of structures as if they were unexported,
+even if they are not. That is, it ignores them completely. Previously they would
+trigger an error, which could cause unexpected compatibility problems if an
+embedded structure added such a field.
+The package also now supports the generic BinaryMarshaler
and
+BinaryUnmarshaler
interfaces of the
+encoding
package
+described above.
+encoding/json
package
+now will always escape ampersands as "\u0026" when printing strings.
+It will now accept but correct invalid UTF-8 in
+Marshal
+(such input was previously rejected).
+Finally, it now supports the generic encoding interfaces of the
+encoding
package
+described above.
+encoding/xml
package
+now allows attributes stored in pointers to be marshaled.
+It also supports the generic encoding interfaces of the
+encoding
package
+described above through the new
+Marshaler
,
+Unmarshaler
,
+and related
+MarshalerAttr
and
+UnmarshalerAttr
+interfaces.
+The package also adds a
+Flush
method
+to the
+Encoder
+type for use by custom encoders. See the documentation for
+EncodeToken
+to see how to use it.
+flag
package now
+has a Getter
interface
+to allow the value of a flag to be retrieved. Due to the
+Go 1 compatibility guidelines, this method cannot be added to the existing
+Value
+interface, but all the existing standard flag types implement it.
+The package also now exports the CommandLine
+flag set, which holds the flags from the command line.
+go/ast
package's
+SliceExpr
struct
+has a new boolean field, Slice3
, which is set to true
+when representing a slice expression with three indices (two colons).
+The default is false, representing the usual two-index form.
+go/build
package adds
+the AllTags
field
+to the Package
type,
+to make it easier to process build tags.
+image/draw
package now
+exports an interface, Drawer
,
+that wraps the standard Draw
method.
+The Porter-Duff operators now implement this interface, in effect binding an operation to
+the draw operator rather than providing it explicitly.
+Given a paletted image as its destination, the new
+FloydSteinberg
+implementation of the
+Drawer
+interface will use the Floyd-Steinberg error diffusion algorithm to draw the image.
+To create palettes suitable for such processing, the new
+Quantizer
interface
+represents implementations of quantization algorithms that choose a palette
+given a full-color image.
+There are no implementations of this interface in the library.
+image/gif
package
+can now create GIF files using the new
+Encode
+and EncodeAll
+functions.
+Their options argument allows specification of an image
+Quantizer
to use;
+if it is nil
, the generated GIF will use the
+Plan9
+color map (palette) defined in the new
+image/color/palette
package.
+The options also specify a
+Drawer
+to use to create the output image;
+if it is nil
, Floyd-Steinberg error diffusion is used.
+Copy
method of the
+io
package now prioritizes its
+arguments differently.
+If one argument implements WriterTo
+and the other implements ReaderFrom
,
+Copy
will now invoke
+WriterTo
to do the work,
+so that less intermediate buffering is required in general.
+net
package requires cgo by default
+because the host operating system must in general mediate network call setup.
+On some systems, though, it is possible to use the network without cgo, and useful
+to do so, for instance to avoid dynamic linking.
+The new build tag netgo
(off by default) allows the construction of a
+net
package in pure Go on those systems where it is possible.
+net
package adds a new field
+DualStack
to the Dialer
+struct for TCP connection setup using a dual IP stack as described in
+RFC 6555.
+net/http
package will no longer
+transmit cookies that are incorrect according to
+RFC 6265.
+It just logs an error and sends nothing.
+Also,
+the net/http
package's
+ReadResponse
+function now permits the *Request
parameter to be nil
,
+whereupon it assumes a GET request.
+Finally, an HTTP server will now serve HEAD
+requests transparently, without the need for special casing in handler code.
+While serving a HEAD request, writes to a
+Handler
's
+ResponseWriter
+are absorbed by the
+Server
+and the client receives an empty body as required by the HTTP specification.
+os/exec
package's
+Cmd.StdinPipe
method
+returns an io.WriteCloser
, but has changed its concrete
+implementation from *os.File
to an unexported type that embeds
+*os.File
, and it is now safe to close the returned value.
+Before Go 1.2, there was an unavoidable race that this change fixes.
+Code that needs access to the methods of *os.File
can use an
+interface type assertion, such as wc.(interface{ Sync() error })
.
+runtime
package relaxes
+the constraints on finalizer functions in
+SetFinalizer
: the
+actual argument can now be any type that is assignable to the formal type of
+the function, as is the case for any normal function call in Go.
+sort
package has a new
+Stable
function that implements
+stable sorting. It is less efficient than the normal sort algorithm, however.
+strings
package adds
+an IndexByte
+function for consistency with the bytes
package.
+sync/atomic
package
+adds a new set of swap functions that atomically exchange the argument with the
+value stored in the pointer, returning the old value.
+The functions are
+SwapInt32
,
+SwapInt64
,
+SwapUint32
,
+SwapUint64
,
+SwapUintptr
,
+and
+SwapPointer
,
+which swaps an unsafe.Pointer
.
+syscall
package now implements
+Sendfile
for Darwin.
+testing
package
+now exports the TB
interface.
+It records the methods in common with the
+T
+and
+B
types,
+to make it easier to share code between tests and benchmarks.
+Also, the
+AllocsPerRun
+function now quantizes the return value to an integer (although it
+still has type float64
), to round off any error caused by
+initialization and make the result more repeatable.
+text/template
package
+now automatically dereferences pointer values when evaluating the arguments
+to "escape" functions such as "html", to bring the behavior of such functions
+in agreement with that of other printing functions such as "printf".
+time
package, the
+Parse
function
+and
+Format
+method
+now handle time zone offsets with seconds, such as in the historical
+date "1871-01-01T05:33:02+00:34:08".
+Also, pattern matching in the formats for those routines is stricter: a non-lowercase letter
+must now follow the standard words such as "Jan" and "Mon".
+unicode
package
+adds In
,
+a nicer-to-use but equivalent version of the original
+IsOneOf
,
+to see whether a character is a member of a Unicode category.
++The latest Go release, version 1.3, arrives six months after 1.2, +and contains no language changes. +It focuses primarily on implementation work, providing +precise garbage collection, +a major refactoring of the compiler toolchain that results in +faster builds, especially for large projects, +significant performance improvements across the board, +and support for DragonFly BSD, Solaris, Plan 9 and Google's Native Client architecture (NaCl). +It also has an important refinement to the memory model regarding synchronization. +As always, Go 1.3 keeps the promise +of compatibility, +and almost everything +will continue to compile and run without change when moved to 1.3. +
+ ++Microsoft stopped supporting Windows 2000 in 2010. +Since it has implementation difficulties +regarding exception handling (signals in Unix terminology), +as of Go 1.3 it is not supported by Go either. +
+ +
+Go 1.3 now includes experimental support for DragonFly BSD on the amd64
(64-bit x86) and 386
(32-bit x86) architectures.
+It uses DragonFly BSD 3.6 or above.
+
+It was not announced at the time, but since the release of Go 1.2, support for Go on FreeBSD +requires FreeBSD 8 or above. +
+ +
+As of Go 1.3, support for Go on FreeBSD requires that the kernel be compiled with the
+COMPAT_FREEBSD32
flag configured.
+
+In concert with the switch to EABI syscalls for ARM platforms, Go 1.3 will run only on FreeBSD 10. +The x86 platforms, 386 and amd64, are unaffected. +
+ +
+Support for the Native Client virtual machine architecture has returned to Go with the 1.3 release.
+It runs on the 32-bit Intel architectures (GOARCH=386
) and also on 64-bit Intel, but using
+32-bit pointers (GOARCH=amd64p32
).
+There is not yet support for Native Client on ARM.
+Note that this is Native Client (NaCl), not Portable Native Client (PNaCl).
+Details about Native Client are here;
+how to set up the Go version is described here.
+
+As of Go 1.3, support for Go on NetBSD requires NetBSD 6.0 or above. +
+ ++As of Go 1.3, support for Go on OpenBSD requires OpenBSD 5.5 or above. +
+ +
+Go 1.3 now includes experimental support for Plan 9 on the 386
(32-bit x86) architecture.
+It requires the Tsemacquire
syscall, which has been in Plan 9 since June, 2012.
+
+Go 1.3 now includes experimental support for Solaris on the amd64
(64-bit x86) architecture.
+It requires illumos, Solaris 11 or above.
+
+The Go 1.3 memory model adds a new rule +concerning sending and receiving on buffered channels, +to make explicit that a buffered channel can be used as a simple +semaphore, using a send into the +channel to acquire and a receive from the channel to release. +This is not a language change, just a clarification about an expected property of communication. +
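+
+A small sketch of the buffered-channel-as-semaphore pattern the rule describes;
+the concurrency limit of 3 and the worker count are arbitrary:
+
+package main
+
+import "fmt"
+
+func main() {
+	sem := make(chan struct{}, 3) // at most 3 workers at once
+	done := make(chan bool)
+
+	for i := 0; i < 10; i++ {
+		go func(i int) {
+			sem <- struct{}{} // acquire a slot
+			fmt.Println("working", i)
+			<-sem // release the slot
+			done <- true
+		}(i)
+	}
+	for i := 0; i < 10; i++ {
+		<-done
+	}
+}
+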
+ ++Go 1.3 has changed the implementation of goroutine stacks away from the old, +"segmented" model to a contiguous model. +When a goroutine needs more stack +than is available, its stack is transferred to a larger single block of memory. +The overhead of this transfer operation amortizes well and eliminates the old "hot spot" +problem when a calculation repeatedly steps across a segment boundary. +Details including performance numbers are in this +design document. +
+ ++For a while now, the garbage collector has been precise when examining +values in the heap; the Go 1.3 release adds equivalent precision to values on the stack. +This means that a non-pointer Go value such as an integer will never be mistaken for a +pointer and prevent unused memory from being reclaimed. +
+ ++Starting with Go 1.3, the runtime assumes that values with pointer type +contain pointers and other values do not. +This assumption is fundamental to the precise behavior of both stack expansion +and garbage collection. +Programs that use package unsafe +to store integers in pointer-typed values are illegal and will crash if the runtime detects the behavior. +Programs that use package unsafe to store pointers +in integer-typed values are also illegal but more difficult to diagnose during execution. +Because the pointers are hidden from the runtime, a stack expansion or garbage collection +may reclaim the memory they point at, creating +dangling pointers. +
+ +
+Updating: Code that uses unsafe.Pointer
to convert
+an integer-typed value held in memory into a pointer is illegal and must be rewritten.
+Such code can be identified by go vet
.
+
+Iterations over small maps no longer happen in a consistent order. +Go 1 defines that “The iteration order over maps +is not specified and is not guaranteed to be the same from one iteration to the next.” +To keep code from depending on map iteration order, +Go 1.0 started each map iteration at a random index in the map. +A new map implementation introduced in Go 1.1 neglected to randomize +iteration for maps with eight or fewer entries, although the iteration order +can still vary from system to system. +This has allowed people to write Go 1.1 and Go 1.2 programs that +depend on small map iteration order and therefore only work reliably on certain systems. +Go 1.3 reintroduces random iteration for small maps in order to flush out these bugs. +
+ ++Updating: If code assumes a fixed iteration order for small maps, +it will break and must be rewritten not to make that assumption. +Because only small maps are affected, the problem arises most often in tests. +
+ +
+As part of the general overhaul to
+the Go linker, the compilers and linkers have been refactored.
+The linker is still a C program, but now the instruction selection phase that
+was part of the linker has been moved to the compiler through the creation of a new
+library called liblink
.
+By doing instruction selection only once, when the package is first compiled,
+this can speed up compilation of large projects significantly.
+
+Updating: Although this is a major internal change, it should have no +effect on programs. +
+ ++GCC release 4.9 will contain the Go 1.2 (not 1.3) version of gccgo. +The release schedules for the GCC and Go projects do not coincide, +which means that 1.3 will be available in the development branch but +that the next GCC release, 4.10, will likely have the Go 1.4 version of gccgo. +
+ +
+The cmd/go
command has several new
+features.
+The go run
and
+go test
subcommands
+support a new -exec
option to specify an alternate
+way to run the resulting binary.
+Its immediate purpose is to support NaCl.
+
+The test coverage support of the go test
+subcommand now automatically sets the coverage mode to -atomic
+when the race detector is enabled, to eliminate false reports about unsafe
+access to coverage counters.
+
+The go test
subcommand
+now always builds the package, even if it has no test files.
+Previously, it would do nothing if no test files were present.
+
+The go build
subcommand
+supports a new -i
option to install dependencies
+of the specified target, but not the target itself.
+
+Cross compiling with cgo
enabled
+is now supported.
+The CC_FOR_TARGET and CXX_FOR_TARGET environment
+variables are used when running all.bash to specify the cross compilers
+for C and C++ code, respectively.
+
+Finally, the go command now supports packages that import Objective-C
+files (suffixed .m
) through cgo.
+
+The cmd/cgo
command,
+which processes import "C"
declarations in Go packages,
+has corrected a serious bug that may cause some packages to stop compiling.
+Previously, all pointers to incomplete struct types translated to the Go type *[0]byte
,
+with the effect that the Go compiler could not diagnose passing one kind of struct pointer
+to a function expecting another.
+Go 1.3 corrects this mistake by translating each different
+incomplete struct to a different named type.
+
+Given the C declaration typedef struct S T
for an incomplete struct S
,
+some Go code used this bug to refer to the types C.struct_S
and C.T
interchangeably.
+Cgo now explicitly allows this use, even for completed struct types.
+However, some Go code also used this bug to pass (for example) a *C.FILE
+from one package to another.
+This is not legal and no longer works: in general Go packages
+should avoid exposing C types and names in their APIs.
+
+Updating: Code confusing pointers to incomplete types or
+passing them across package boundaries will no longer compile
+and must be rewritten.
+If the conversion is correct and must be preserved,
+use an explicit conversion via unsafe.Pointer
.
+
+For Go programs that use SWIG, SWIG version 3.0 is now required.
+The cmd/go
command will now link the
+SWIG generated object files directly into the binary, rather than
+building and linking with a shared library.
+
+In the gc toolchain, the assemblers now use the
+same command-line flag parsing rules as the Go flag package, a departure
+from the traditional Unix flag parsing.
+This may affect scripts that invoke the tool directly.
+For example,
+go tool 6a -SDfoo
must now be written
+go tool 6a -S -D foo
.
+(The same change was made to the compilers and linkers in Go 1.1.)
+
+When invoked with the -analysis
flag,
+godoc
+now performs sophisticated static
+analysis of the code it indexes.
+The results of analysis are presented in both the source view and the
+package documentation view, and include the call graph of each package
+and the relationships between
+definitions and references,
+types and their methods,
+interfaces and their implementations,
+send and receive operations on channels,
+functions and their callers, and
+call sites and their callees.
+
+The program misc/benchcmp
that compares
+performance across benchmarking runs has been rewritten.
+Once a shell and awk script in the main repository, it is now a Go program in the go.tools
repo.
+Documentation is here.
+
+For the few of us that build Go distributions, the tool misc/dist
has been
+moved and renamed; it now lives in misc/makerelease
, still in the main repository.
+
+The performance of Go binaries for this release has improved in many cases due to changes +in the runtime and garbage collection, plus some changes to libraries. +Significant instances include: +
+ +regexp
+is now significantly faster for certain simple expressions due to the implementation of
+a second, one-pass execution engine.
+The choice of which engine to use is automatic;
+the details are hidden from the user.
++Also, the runtime now includes in stack dumps how long a goroutine has been blocked, +which can be useful information when debugging deadlocks or performance issues. +
+ +
+A new package debug/plan9obj
was added to the standard library.
+It implements access to Plan 9 a.out object files.
+
+A previous bug in crypto/tls
+made it possible to skip verification in TLS inadvertently.
+In Go 1.3, the bug is fixed: one must specify either ServerName or
+InsecureSkipVerify, and if ServerName is specified it is enforced.
+This may break existing code that incorrectly depended on insecure
+behavior.
+
+There is an important new type added to the standard library: sync.Pool
.
+It provides an efficient mechanism for implementing certain types of caches whose memory
+can be reclaimed automatically by the system.
+
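+A minimal sketch of a pool of reusable buffers; the buffer type is only an example:
+
+package main
+
+import (
+	"bytes"
+	"fmt"
+	"sync"
+)
+
+// New must be able to recreate objects, since the runtime may drop
+// pooled values at any time.
+var bufPool = sync.Pool{
+	New: func() interface{} { return new(bytes.Buffer) },
+}
+
+func main() {
+	b := bufPool.Get().(*bytes.Buffer)
+	b.Reset()
+	fmt.Fprintf(b, "hello %d", 1)
+	fmt.Println(b.String())
+	bufPool.Put(b)
+}
+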
+The testing
package's benchmarking helper,
+B
, now has a
+RunParallel
method
+to make it easier to run benchmarks that exercise multiple CPUs.
+
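+A short sketch; the package name and the doWork function are stand-ins for real
+code under test:
+
+package mypkg_test
+
+import "testing"
+
+// doWork is a placeholder for the code being benchmarked.
+func doWork() int {
+	s := 0
+	for i := 0; i < 100; i++ {
+		s += i
+	}
+	return s
+}
+
+func BenchmarkWork(b *testing.B) {
+	// RunParallel distributes b.N iterations across GOMAXPROCS goroutines.
+	b.RunParallel(func(pb *testing.PB) {
+		for pb.Next() {
+			_ = doWork()
+		}
+	})
+}
+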
+Updating: The crypto/tls fix may break existing code, but such +code was erroneous and should be updated. +
+ ++The following list summarizes a number of minor changes to the library, mostly additions. +See the relevant package documentation for more information about each change. +
+ +crypto/tls
package,
+a new DialWithDialer
+function lets one establish a TLS connection using an existing dialer, making it easier
+to control dial options such as timeouts.
+The package also now reports the TLS version used by the connection in the
+ConnectionState
+struct.
+CreateCertificate
+function of the crypto/tls
package
+now supports parsing (and elsewhere, serialization) of PKCS #10 certificate
+signature requests.
+fmt
package now define %F
+as a synonym for %f
when printing floating-point values.
+math/big
package's
+Int
and
+Rat
types
+now implement
+encoding.TextMarshaler
and
+encoding.TextUnmarshaler
.
+Pow
,
+now specifies the behavior when the first argument is zero.
+It was undefined before.
+The details are in the documentation for the function.
+net/http
package now exposes the
+properties of a TLS connection used to make a client request in the new
+Response.TLS
field.
+net/http
package now
+allows setting an optional server error logger
+with Server.ErrorLog
.
+The default is still that all errors go to stderr.
+net/http
package now
+supports disabling HTTP keep-alive connections on the server
+with Server.SetKeepAlivesEnabled
.
+The default continues to be that the server does keep-alive (reuses
+connections for multiple requests) by default.
+Only resource-constrained servers or those in the process of graceful
+shutdown will want to disable them.
+net/http
package adds an optional
+Transport.TLSHandshakeTimeout
+setting to cap the amount of time HTTP client requests will wait for
+TLS handshakes to complete.
+It's now also set by default
+on DefaultTransport
.
+net/http
package's
+DefaultTransport
,
+used by the HTTP client code, now
+enables TCP
+keep-alives by default.
+Other Transport
+values with a nil Dial
field continue to function the same
+as before: no TCP keep-alives are used.
+net/http
package
+now enables TCP
+keep-alives for incoming server requests when
+ListenAndServe
+or
+ListenAndServeTLS
+are used.
+When a server is started otherwise, TCP keep-alives are not enabled.
+net/http
package now
+provides an
+optional Server.ConnState
+callback to hook various phases of a server connection's lifecycle
+(see ConnState
).
+This can be used to implement rate limiting or graceful shutdown.
+net/http
package's HTTP
+client now has an
+optional Client.Timeout
+field to specify an end-to-end timeout on requests made using the
+client.
+net/http
package's
+Request.ParseMultipartForm
+method will now return an error if the body's Content-Type
+is not multipart/form-data
.
+Prior to Go 1.3 it would silently fail and return nil
.
+Code that relies on the previous behavior should be updated.
+net
package,
+the Dialer
struct now
+has a KeepAlive
option to specify a keep-alive period for the connection.
+net/http
package's
+Transport
+now closes Request.Body
+consistently, even on error.
+os/exec
package now implements
+what the documentation has always said with regard to relative paths for the binary.
+In particular, it only calls LookPath
+when the binary's file name contains no path separators.
+SetMapIndex
+function in the reflect
package
+no longer panics when deleting from a nil
map.
+runtime.Goexit
+and all other goroutines finish execution, the program now always crashes,
+reporting a detected deadlock.
+Earlier versions of Go handled this situation inconsistently: most instances
+were reported as deadlocks, but some trivial cases exited cleanly instead.
+debug.WriteHeapDump
+that writes out a description of the heap.
+CanBackquote
+function in the strconv
package
+now considers the DEL
character, U+007F
, to be
+non-printing.
+syscall
package now provides
+SendmsgN
+as an alternate version of
+Sendmsg
+that returns the number of bytes written.
+syscall
package now
+supports the cdecl calling convention through the addition of a new function
+NewCallbackCDecl
+alongside the existing function
+NewCallback
.
+testing
package now
+diagnoses tests that call panic(nil)
, which are almost always erroneous.
+Also, tests now write profiles (if invoked with profiling flags) even on failure.
+unicode
package and associated
+support throughout the system has been upgraded from
+Unicode 6.2.0 to Unicode 6.3.0.
++The latest Go release, version 1.4, arrives as scheduled six months after 1.3. +
+ +
+It contains only one tiny language change,
+in the form of a backwards-compatible simple variant of for
-range
loop,
+and a possibly breaking change to the compiler involving methods on pointers-to-pointers.
+
+The release focuses primarily on implementation work, improving the garbage collector
+and preparing the ground for a fully concurrent collector to be rolled out in the
+next few releases.
+Stacks are now contiguous, reallocated when necessary rather than linking on new
+"segments";
+this release therefore eliminates the notorious "hot stack split" problem.
+There are some new tools available including support in the go
command
+for build-time source code generation.
+The release also adds support for ARM processors on Android and Native Client (NaCl)
+and for AMD64 on Plan 9.
+
+As always, Go 1.4 keeps the promise +of compatibility, +and almost everything +will continue to compile and run without change when moved to 1.4. +
+ +
+Up until Go 1.3, the for
-range
loop had two forms
+
+for i, v := range x { + ... +} ++ +
+and +
+ ++for i := range x { + ... +} ++ +
+If one was not interested in the loop values, only the iteration itself, it was still
+necessary to mention a variable (probably the blank identifier, as in
+for
_
=
range
x
), because
+the form
+
+for range x { + ... +} ++ +
+was not syntactically permitted. +
+ ++This situation seemed awkward, so as of Go 1.4 the variable-free form is now legal. +The pattern arises rarely but the code can be cleaner when it does. +
+ +
+Updating: The change is strictly backwards compatible to existing Go
+programs, but tools that analyze Go parse trees may need to be modified to accept
+this new form as the
+Key
field of RangeStmt
+may now be nil
.
+
+Given these declarations, +
+ ++type T int +func (T) M() {} +var x **T ++ +
+both gc
and gccgo
accepted the method call
+
+x.M() ++ +
+which is a double dereference of the pointer-to-pointer x
.
+The Go specification allows a single dereference to be inserted automatically,
+but not two, so this call is erroneous according to the language definition.
+It has therefore been disallowed in Go 1.4, which is a breaking change,
+although very few programs will be affected.
+
+Updating: Code that depends on the old, erroneous behavior will no longer +compile but is easy to fix by adding an explicit dereference. +
+ +
+Go 1.4 can build binaries for ARM processors running the Android operating system.
+It can also build a .so
library that can be loaded by an Android application
+using the supporting packages in the mobile subrepository.
+A brief description of the plans for this experimental port are available
+here.
+
+The previous release introduced Native Client (NaCl) support for the 32-bit x86
+(GOARCH=386
)
+and 64-bit x86 using 32-bit pointers (GOARCH=amd64p32).
+The 1.4 release adds NaCl support for ARM (GOARCH=arm).
+
+This release adds support for the Plan 9 operating system on AMD64 processors,
+provided the kernel supports the nsec
system call and uses 4K pages.
+
+The unsafe
package allows one
+to defeat Go's type system by exploiting internal details of the implementation
+or machine representation of data.
+It was never explicitly specified what use of unsafe
meant
+with respect to compatibility as specified in the
+Go compatibility guidelines.
+The answer, of course, is that we can make no promise of compatibility
+for code that does unsafe things.
+
+We have clarified this situation in the documentation included in the release.
+The Go compatibility guidelines and the
+docs for the unsafe
package
+are now explicit that unsafe code is not guaranteed to remain compatible.
+
+Updating: Nothing technical has changed; this is just a clarification +of the documentation. +
+ + ++Prior to Go 1.4, the runtime (garbage collector, concurrency support, interface management, +maps, slices, strings, ...) was mostly written in C, with some assembler support. +In 1.4, much of the code has been translated to Go so that the garbage collector can scan +the stacks of programs in the runtime and get accurate information about what variables +are active. +This change was large but should have no semantic effect on programs. +
+ ++This rewrite allows the garbage collector in 1.4 to be fully precise, +meaning that it is aware of the location of all active pointers in the program. +This means the heap will be smaller as there will be no false positives keeping non-pointers alive. +Other related changes also reduce the heap size, which is smaller by 10%-30% overall +relative to the previous release. +
+ ++A consequence is that stacks are no longer segmented, eliminating the "hot split" problem. +When a stack limit is reached, a new, larger stack is allocated, all active frames for +the goroutine are copied there, and any pointers into the stack are updated. +Performance can be noticeably better in some cases and is always more predictable. +Details are available in the design document. +
+ ++The use of contiguous stacks means that stacks can start smaller without triggering performance issues, +so the default starting size for a goroutine's stack in 1.4 has been reduced from 8192 bytes to 2048 bytes. +
+ ++As preparation for the concurrent garbage collector scheduled for the 1.5 release, +writes to pointer values in the heap are now done by a function call, +called a write barrier, rather than directly from the function updating the value. +In this next release, this will permit the garbage collector to mediate writes to the heap while it is running. +This change has no semantic effect on programs in 1.4, but was +included in the release to test the compiler and the resulting performance. +
+ ++The implementation of interface values has been modified. +In earlier releases, the interface contained a word that was either a pointer or a one-word +scalar value, depending on the type of the concrete object stored. +This implementation was problematical for the garbage collector, +so as of 1.4 interface values always hold a pointer. +In running programs, most interface values were pointers anyway, +so the effect is minimal, but programs that store integers (for example) in +interfaces will see more allocations. +
+ +
+As of Go 1.3, the runtime crashes if it finds a memory word that should contain
+a valid pointer but instead contains an obviously invalid pointer (for example, the value 3).
+Programs that store integers in pointer values may run afoul of this check and crash.
+In Go 1.4, setting the GODEBUG
variable
+invalidptr=0
disables
+the crash as a workaround, but we cannot guarantee that future releases will be
+able to avoid the crash; the correct fix is to rewrite code not to alias integers and pointers.
+
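+As an illustration (a deliberately incorrect sketch, not code from the release),
+the following program aliases an integer and a pointer in exactly the way the
+check rejects; under Go 1.4 it may crash when the collector scans the stack
+unless run with GODEBUG=invalidptr=0:
+
+package main
+
+import "unsafe"
+
+func main() {
+	// Store the integer 3 in a pointer-typed variable; the runtime treats
+	// this word as an obviously invalid pointer if it examines it.
+	var p *int
+	*(*uintptr)(unsafe.Pointer(&p)) = 3
+	_ = p
+}
+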
+The language accepted by the assemblers cmd/5a
, cmd/6a
+and cmd/8a
has had several changes,
+mostly to make it easier to deliver type information to the runtime.
+
+First, the textflag.h
file that defines flags for TEXT
directives
+has been copied from the linker source directory to a standard location so it can be
+included with the simple directive
+
+#include "textflag.h" ++ +
+The more important changes are in how assembler source can define the necessary
+type information.
+For most programs it will suffice to move data
+definitions (DATA
and GLOBL
directives)
+out of assembly into Go files
+and to write a Go declaration for each assembly function.
+The assembly document describes what to do.
+
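+For instance (a hedged sketch with invented names), an assembly routine kept in
+an .s file is now paired with a Go declaration that supplies the type
+information for its arguments and results:
+
+// sum.go (hypothetical file): the body of add lives in sum_amd64.s;
+// this bodyless declaration gives the toolchain the type information.
+package sum
+
+// add returns x+y; implemented in assembly.
+func add(x, y int64) int64
+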
+Updating:
+Assembly files that include textflag.h
from its old
+location will still work, but should be updated.
+For the type information, most assembly routines will need no change,
+but all should be examined.
+Assembly source files that define data,
+functions with non-empty stack frames, or functions that return pointers
+need particular attention.
+A description of the necessary (but simple) changes
+is in the assembly document.
+
+More information about these changes is in the assembly document. +
+ ++The release schedules for the GCC and Go projects do not coincide. +GCC release 4.9 contains the Go 1.2 version of gccgo. +The next release, GCC 5, will likely have the Go 1.4 version of gccgo. +
+ ++Go's package system makes it easy to structure programs into components with clean boundaries, +but there are only two forms of access: local (unexported) and global (exported). +Sometimes one wishes to have components that are not exported, +for instance to avoid acquiring clients of interfaces to code that is part of a public repository +but not intended for use outside the program to which it belongs. +
+ +
+The Go language does not have the power to enforce this distinction, but as of Go 1.4 the
+go
command introduces
+a mechanism to define "internal" packages that may not be imported by packages outside
+the source subtree in which they reside.
+
+To create such a package, place it in a directory named internal
or in a subdirectory of a directory
+named internal.
+When the go
command sees an import of a package with internal
in its path,
+it verifies that the package doing the import
+is within the tree rooted at the parent of the internal
directory.
+For example, a package .../a/b/c/internal/d/e/f
+can be imported only by code in the directory tree rooted at .../a/b/c
.
+It cannot be imported by code in .../a/b/g
or in any other repository.
+
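+As a small sketch with invented import paths, a file in the tree rooted at the
+internal directory's parent may import the internal package, while code outside
+that tree may not:
+
+// File example.com/a/b/c/e/e.go (hypothetical layout).
+package e
+
+// Allowed: this package lives under example.com/a/b/c, the parent of internal.
+import _ "example.com/a/b/c/internal/d"
+
+// A package outside that tree, say example.com/z, would be refused the same
+// import by the go command.
+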
+For Go 1.4, the internal package mechanism is enforced for the main Go repository; +from 1.5 and onward it will be enforced for any repository. +
+ ++Full details of the mechanism are in +the design document. +
+ +
+Code often lives in repositories hosted by public services such as github.com
,
+meaning that the import paths for packages begin with the name of the hosting service,
+github.com/rsc/pdf
for example.
+One can use
+an existing mechanism
+to provide a "custom" or "vanity" import path such as
+rsc.io/pdf
, but
+that creates two valid import paths for the package.
+That is a problem: one may inadvertently import the package through the two
+distinct paths in a single program, which is wasteful;
+miss an update to a package because the path being used is not recognized to be
+out of date;
+or break clients using the old path by moving the package to a different hosting service.
+
+Go 1.4 introduces an annotation for package clauses in Go source that identify a canonical
+import path for the package.
+If an import is attempted using a path that is not canonical,
+the go
command
+will refuse to compile the importing package.
+
+The syntax is simple: put an identifying comment on the package line. +For our example, the package clause would read: +
+ ++package pdf // import "rsc.io/pdf" ++ +
+With this in place,
+the go
command will
+refuse to compile a package that imports github.com/rsc/pdf
,
+ensuring that the code can be moved without breaking users.
+
+The check is at build time, not download time, so if go
get
+fails because of this check, the mis-imported package has been copied to the local machine
+and should be removed manually.
+
+To complement this new feature, a check has been added at update time to verify
+that the local package's remote repository matches that of its custom import.
+The go
get
-u
command will fail to
+update a package if its remote repository has changed since it was first
+downloaded.
+The new -f
flag overrides this check.
+
+Further information is in +the design document. +
+ +
+The Go project subrepositories (code.google.com/p/go.tools
and so on)
+are now available under custom import paths replacing code.google.com/p/go.
with golang.org/x/
,
+as in golang.org/x/tools
.
+We will add canonical import comments to the code around June 1, 2015,
+at which point Go 1.4 and later will stop accepting the old code.google.com
paths.
+
+Updating: All code that imports from subrepositories should change
+to use the new golang.org
paths.
+Go 1.0 and later can resolve and import the new paths, so updating will not break
+compatibility with older releases.
+Code that has not updated will stop compiling with Go 1.4 around June 1, 2015.
+
+The go
command has a new subcommand,
+go generate
,
+to automate the running of tools to generate source code before compilation.
+For example, it can be used to run the yacc
+compiler-compiler on a .y
file to produce the Go source file implementing the grammar,
+or to automate the generation of String
methods for typed constants using the new
+stringer
+tool in the golang.org/x/tools
subrepository.
+
+For more information, see the +design document. +
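+As a sketch of typical use (the type and package are invented for the example),
+a directive in an ordinary Go source file tells go generate to run stringer,
+which writes the String method for the constants into a generated file:
+
+//go:generate stringer -type=Pill
+
+// Package painkiller is a hypothetical example; running "go generate"
+// produces pill_string.go containing the String method for Pill.
+package painkiller
+
+type Pill int
+
+const (
+	Placebo Pill = iota
+	Aspirin
+	Ibuprofen
+)
+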
+ +
+Build constraints, also known as build tags, control compilation by including or excluding files
+(see the documentation /go/build
).
+Compilation can also be controlled by the name of the file itself by "tagging" the file with
+a suffix (before the .go
or .s
extension) with an underscore
+and the name of the architecture or operating system.
+For instance, the file gopher_arm.go
will only be compiled if the target
+processor is an ARM.
+
+Before Go 1.4, a file called just arm.go
was similarly tagged, but this behavior
+can break sources when new architectures are added, causing files to suddenly become tagged.
+In 1.4, therefore, a file will be tagged in this manner only if the tag (architecture or operating
+system name) is preceded by an underscore.
+
+Updating: Packages that depend on the old behavior will no longer compile correctly.
+Files with names like windows.go
or amd64.go
should either
+have explicit build tags added to the source or be renamed to something like
+os_windows.go
or support_amd64.go
.
+
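+A minimal sketch of the first option (package name invented): a file that used
+to rely on its name for tagging can instead carry an explicit build constraint.
+
+// +build windows
+
+// This file was previously named windows.go and relied on file-name tagging;
+// the constraint above keeps it Windows-only under the Go 1.4 rules.
+package mypkg
+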
+There were a number of minor changes to the
+cmd/go
+command worth noting.
+
cgo
is being used to build the package,
+the go
command now refuses to compile C source files,
+since the relevant C compilers
+(6c
etc.)
+are intended to be removed from the installation in some future release.
+(They are used today only to build part of the runtime.)
+It is difficult to use them correctly in any case, so any extant uses are likely incorrect;
+we have therefore disabled them.
+go
test
+subcommand has a new flag, -o
, to set the name of the resulting binary,
+corresponding to the same flag in other subcommands.
+The non-functional -file
flag has been removed.
+go
test
+subcommand will compile and link all *_test.go
files in the package,
+even when there are no Test
functions in them.
+It previously ignored such files.
+go
build
+subcommand's
+-a
flag has been changed for non-development installations.
+For installations running a released distribution, the -a
flag will no longer
+rebuild the standard library and commands, to avoid overwriting the installation's files.
+
+In the main Go source repository, the source code for the packages was kept in
+the directory src/pkg
, which made sense but differed from
+other repositories, including the Go subrepositories.
+In Go 1.4, the pkg
level of the source tree is now gone, so for example
+the fmt
package's source, once kept in
+directory src/pkg/fmt
, now lives one level higher in src/fmt
.
+
+Updating: Tools like godoc
that discover source code
+need to know about the new location. All tools and services maintained by the Go team
+have been updated.
+
+Due to runtime changes in this release, Go 1.4 requires SWIG 3.0.3. +
+ +
+The standard repository's top-level misc
directory used to contain
+Go support for editors and IDEs: plugins, initialization scripts and so on.
+Maintaining these was becoming time-consuming
+and needed external help because many of the editors listed were not used by
+members of the core team.
+It also required us to make decisions about which plugin was best for a given
+editor, even for editors we do not use.
+
+The Go community at large is much better suited to managing this information. +In Go 1.4, therefore, this support has been removed from the repository. +Instead, there is a curated, informative list of what's available on +a wiki page. +
+ ++Most programs will run about the same speed or slightly faster in 1.4 than in 1.3; +some will be slightly slower. +There are many changes, making it hard to be precise about what to expect. +
+ ++As mentioned above, much of the runtime was translated to Go from C, +which led to some reduction in heap sizes. +It also improved performance slightly because the Go compiler is better +at optimization, due to things like inlining, than the C compiler used to build +the runtime. +
+ ++The garbage collector was sped up, leading to measurable improvements for +garbage-heavy programs. +On the other hand, the new write barriers slow things down again, typically +by about the same amount but, depending on their behavior, some programs +may be somewhat slower or faster. +
+ ++Library changes that affect performance are documented below. +
+ ++There are no new packages in this release. +
+ +
+The Scanner
type in the
+bufio
package
+has had a bug fixed that may require changes to custom
+split functions
.
+The bug made it impossible to generate an empty token at EOF; the fix
+changes the end conditions seen by the split function.
+Previously, scanning stopped at EOF if there was no more data.
+As of 1.4, the split function will be called once at EOF after input is exhausted,
+so the split function can generate a final empty token
+as the documentation already promised.
+
+Updating: Custom split functions may need to be modified to +handle empty tokens at EOF as desired. +
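+For instance (a hedged sketch, not code from the standard library), a
+comma-separated split function that does not want a trailing empty token can
+add the same guard the standard split functions use for the extra call at EOF:
+
+package scanner // hypothetical package
+
+import "bytes"
+
+// commaSplit is a bufio.SplitFunc that splits on commas.
+func commaSplit(data []byte, atEOF bool) (advance int, token []byte, err error) {
+	if atEOF && len(data) == 0 {
+		return 0, nil, nil // input exhausted; emit no final empty token
+	}
+	if i := bytes.IndexByte(data, ','); i >= 0 {
+		return i + 1, data[:i], nil // token before the comma
+	}
+	if atEOF {
+		return len(data), data, nil // final token with no trailing comma
+	}
+	return 0, nil, nil // request more input
+}
+
+A bufio.Scanner configured with s.Split(commaSplit) then behaves as it did
+before Go 1.4.
+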
+ +
+The syscall
package is now frozen except
+for changes needed to maintain the core repository.
+In particular, it will no longer be extended to support new or different system calls
+that are not used by the core.
+The reasons are described at length in a
+separate document.
+
+A new subrepository, golang.org/x/sys, +has been created to serve as the location for new developments to support system +calls on all kernels. +It has a nicer structure, with three packages that each hold the implementation of +system calls for one of +Unix, +Windows and +Plan 9. +These packages will be curated more generously, accepting all reasonable changes +that reflect kernel interfaces in those operating systems. +See the documentation and the article mentioned above for more information. +
+ +
+Updating: Existing programs are not affected as the syscall
+package is largely unchanged from the 1.3 release.
+Future development that requires system calls not in the syscall
package
+should build on golang.org/x/sys
instead.
+
+The following list summarizes a number of minor changes to the library, mostly additions. +See the relevant package documentation for more information about each change. +
+ +archive/zip
package's
+Writer
now supports a
+Flush
method.
+compress/flate
,
+compress/gzip
,
+and compress/zlib
+packages now support a Reset
method
+for the decompressors, allowing them to reuse buffers and improve performance.
+The compress/gzip
package also has a
+Multistream
method to control support
+for multistream files.
+crypto
package now has a
+Signer
interface, implemented by the
+PrivateKey
types in
+crypto/ecdsa
and
+crypto/rsa
.
+crypto/tls
package
+now supports ALPN as defined in RFC 7301.
+crypto/tls
package
+now supports programmatic selection of server certificates
+through the new CertificateForName
function
+of the Config
struct.
+database/sql
package can now list all registered
+Drivers
.
+debug/dwarf
package now supports
+UnspecifiedType
s.
+encoding/asn1
package,
+optional elements with a default value will now only be omitted if they have that value.
+encoding/csv
package no longer
+quotes empty strings but does quote the end-of-data marker \.
(backslash dot).
+This is permitted by the definition of CSV and allows it to work better with Postgres.
+encoding/gob
package has been rewritten to eliminate
+the use of unsafe operations, allowing it to be used in environments that do not permit use of the
+unsafe
package.
+For typical uses it will be 10-30% slower, but the delta is dependent on the type of the data and
+in some cases, especially involving arrays, it can be faster.
+There is no functional change.
+encoding/xml
package's
+Decoder
can now report its input offset.
+fmt
package,
+formatting of pointers to maps has changed to be consistent with that of pointers
+to structs, arrays, and so on.
+For instance, &map[string]int{"one":
1}
now prints by default as
+&map[one:
1]
rather than as a hexadecimal pointer value.
+image
package's
+Image
+implementations like
+RGBA
and
+Gray
have specialized
+RGBAAt
and
+GrayAt
methods alongside the general
+At
method.
+image/png
package now has an
+Encoder
+type to control the compression level used for encoding.
+math
package now has a
+Nextafter32
function.
+net/http
package's
+Request
type
+has a new BasicAuth
method
+that returns the username and password from authenticated requests using the
+HTTP Basic Authentication
+Scheme.
+net/http
package's
+Transport
type
+has a new DialTLS
hook
+that allows customizing the behavior of outbound TLS connections.
+net/http/httputil
package's
+ReverseProxy
type
+has a new field,
+ErrorLog
, that
+provides user control of logging.
+os
package
+now implements symbolic links on the Windows operating system
+through the Symlink
function.
+Other operating systems already have this functionality.
+There is also a new Unsetenv
function.
+reflect
package's
+Type
interface
+has a new method, Comparable
,
+that reports whether the type implements general comparisons.
+reflect
package, the
+Value
type is now three words instead of four
+because of changes to the implementation of interfaces in the runtime.
+This saves memory but has no semantic effect.
+runtime
package
+now implements monotonic clocks on Windows,
+as it already did for the other systems.
+runtime
package's
+Mallocs
counter
+now counts very small allocations that were missed in Go 1.3.
+This may break tests using ReadMemStats
+or AllocsPerRun
+due to the more accurate answer.
+runtime
package,
+an array PauseEnd
+has been added to the
+MemStats
+and GCStats
structs.
+This array is a circular buffer of times when garbage collection pauses ended.
+The corresponding pause durations are already recorded in
+PauseNs
+runtime/race
package
+now supports FreeBSD, which means the
+go
command's -race
+flag now works on FreeBSD.
+sync/atomic
package
+has a new type, Value
.
+Value
provides an efficient mechanism for atomic loads and
+stores of values of arbitrary type.
+syscall
package's
+implementation on Linux, the
+Setuid
+and Setgid
functions have been disabled
+because those system calls operate on the calling thread, not the whole process, which is
+different from other platforms and not the expected result.
+testing
package
+has a new facility to provide more control over running a set of tests.
+If the test code contains a function
+
+func TestMain(m *testing.M
)
+
+
+that function will be called instead of running the tests directly.
+The M
struct contains methods to access and run the tests.
+testing
package,
+a new Coverage
+function reports the current test coverage fraction,
+enabling individual tests to report how much they are contributing to the
+overall coverage.
+text/scanner
package's
+Scanner
type
+has a new function,
+IsIdentRune
,
+allowing one to control the definition of an identifier when scanning.
+text/template
package's boolean
+functions eq
, lt
, and so on have been generalized to allow comparison
+of signed and unsigned integers, simplifying their use in practice.
+(Previously one could only compare values of the same signedness.)
+All negative values compare less than all unsigned values.
+time
package now uses the standard symbol for the micro prefix,
+the micro symbol (U+00B5 'µ'), to print microsecond durations.
+ParseDuration
still accepts us
+but the package no longer prints microseconds as us
.
++The latest Go release, version 1.5, +is a significant release, including major architectural changes to the implementation. +Despite that, we expect almost all Go programs to continue to compile and run as before, +because the release still maintains the Go 1 promise +of compatibility. +
+ ++The biggest developments in the implementation are: +
+ +GOMAXPROCS
set to the
+number of cores available; in prior releases it defaulted to 1.
+go
command now provides experimental
+support for "vendoring" external dependencies.
+go tool trace
command supports fine-grained
+tracing of program execution.
+go doc
command (distinct from godoc
)
+is customized for command-line use.
++These and a number of other changes to the implementation and tools +are discussed below. +
+ ++The release also contains one small language change involving map literals. +
+ ++Finally, the timing of the release +strays from the usual six-month interval, +both to provide more time to prepare this major release and to shift the schedule thereafter to +time the release dates more conveniently. +
+ ++Due to an oversight, the rule that allowed the element type to be elided from slice literals was not +applied to map keys. +This has been corrected in Go 1.5. +An example will make this clear. +As of Go 1.5, this map literal, +
+ ++m := map[Point]string{ + Point{29.935523, 52.891566}: "Persepolis", + Point{-25.352594, 131.034361}: "Uluru", + Point{37.422455, -122.084306}: "Googleplex", +} ++ +
+may be written as follows, without the Point
type listed explicitly:
+
+m := map[Point]string{ + {29.935523, 52.891566}: "Persepolis", + {-25.352594, 131.034361}: "Uluru", + {37.422455, -122.084306}: "Googleplex", +} ++ +
+The compiler and runtime are now implemented in Go and assembler, without C.
+The only C source left in the tree is related to testing or to cgo
.
+There was a C compiler in the tree in 1.4 and earlier.
+It was used to build the runtime; a custom compiler was necessary in part to
+guarantee the C code would work with the stack management of goroutines.
+Since the runtime is in Go now, there is no need for this C compiler and it is gone.
+Details of the process to eliminate C are discussed elsewhere.
+
+The conversion from C was done with the help of custom tools created for the job. +Most important, the compiler was actually moved by automatic translation of +the C code into Go. +It is in effect the same program in a different language. +It is not a new implementation +of the compiler so we expect the process will not have introduced new compiler +bugs. +An overview of this process is available in the slides for +this presentation. +
+ +
+Independent of but encouraged by the move to Go, the names of the tools have changed.
+The old names 6g
, 8g
and so on are gone; instead there
+is just one binary, accessible as go
tool
compile
,
+that compiles Go source into binaries suitable for the architecture and operating system
+specified by $GOARCH
and $GOOS
.
+Similarly, there is now one linker (go
tool
link
)
+and one assembler (go
tool
asm
).
+The linker was translated automatically from the old C implementation,
+but the assembler is a new native Go implementation discussed
+in more detail below.
+
+Similar to the drop of the names 6g
, 8g
, and so on,
+the output of the compiler and assembler are now given a plain .o
suffix
+rather than .8
, .6
, etc.
+
+The garbage collector has been re-engineered for 1.5 as part of the development +outlined in the design document. +Expected latencies are much lower than with the collector +in prior releases, through a combination of advanced algorithms, +better scheduling of the collector, +and running more of the collection in parallel with the user program. +The "stop the world" phase of the collector +will almost always be under 10 milliseconds and usually much less. +
+ ++For systems that benefit from low latency, such as user-responsive web sites, +the drop in expected latency with the new collector may be important. +
+ ++Details of the new collector were presented in a +talk at GopherCon 2015. +
+ ++In Go 1.5, the order in which goroutines are scheduled has been changed. +The properties of the scheduler were never defined by the language, +but programs that depend on the scheduling order may be broken +by this change. +We have seen a few (erroneous) programs affected by this change. +If you have programs that implicitly depend on the scheduling +order, you will need to update them. +
+ +
+Another potentially breaking change is that the runtime now
+sets the default number of threads to run simultaneously,
+defined by GOMAXPROCS
, to the number
+of cores available on the CPU.
+In prior releases the default was 1.
+Programs that do not expect to run with multiple cores may
+break inadvertently.
+They can be updated by removing the restriction or by setting
+GOMAXPROCS
explicitly.
+For a more detailed discussion of this change, see
+the design document.
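+As a minimal sketch, a program that genuinely depends on the old behavior can
+pin itself back to a single processor at startup; fixing the underlying
+assumption is usually the better course.
+
+package main
+
+import "runtime"
+
+func main() {
+	// Restore the pre-1.5 default of one processor.
+	runtime.GOMAXPROCS(1)
+
+	// ... rest of the program ...
+}
+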
+
+Now that the Go compiler and runtime are implemented in Go, a Go compiler
+must be available to compile the distribution from source.
+Thus, to build the Go core, a working Go distribution must already be in place.
+(Go programmers who do not work on the core are unaffected by this change.)
+Any Go 1.4 or later distribution (including gccgo
) will serve.
+For details, see the design document.
+
+Due mostly to the industry's move away from the 32-bit x86 architecture,
+the set of binary downloads provided is reduced in 1.5.
+A distribution for the OS X operating system is provided only for the
+amd64
architecture, not 386
.
+Similarly, the ports for Snow Leopard (Apple OS X 10.6) still work but are no
+longer released as a download or maintained since Apple no longer maintains that version
+of the operating system.
+Also, the dragonfly/386
port is no longer supported at all
+because DragonflyBSD itself no longer supports the 32-bit 386 architecture.
+
+There are however several new ports available to be built from source.
+These include darwin/arm
and darwin/arm64
.
+The new port linux/arm64
is mostly in place, but cgo
+is only supported using external linking.
+
+Also available as experiments are ppc64
+and ppc64le
(64-bit PowerPC, big- and little-endian).
+Both these ports support cgo
but
+only with internal linking.
+
+On FreeBSD, Go 1.5 requires FreeBSD 8-STABLE+ because of its new use of the SYSCALL
instruction.
+
+On NaCl, Go 1.5 requires SDK version pepper-41. Later pepper versions are not +compatible due to the removal of the sRPC subsystem from the NaCl runtime. +
+ +
+On Darwin, the use of the system X.509 certificate interface can be disabled
+with the ios
build tag.
+
+The Solaris port now has full support for cgo and the packages
+net
and
+crypto/x509
,
+as well as a number of other fixes and improvements.
+
+As part of the process to eliminate C from the tree, the compiler and +linker were translated from C to Go. +It was a genuine (machine assisted) translation, so the new programs are essentially +the old programs translated rather than new ones with new bugs. +We are confident the translation process has introduced few if any new bugs, +and in fact uncovered a number of previously unknown bugs, now fixed. +
+ ++The assembler is a new program, however; it is described below. +
+ +
+The suites of programs that were the compilers (6g
, 8g
, etc.),
+the assemblers (6a
, 8a
, etc.),
+and the linkers (6l
, 8l
, etc.)
+have each been consolidated into a single tool that is configured
+by the environment variables GOOS
and GOARCH
.
+The old names are gone; the new tools are available through the go
tool
+mechanism as go tool compile
,
+go tool asm
,
+and go tool link
.
+Also, the file suffixes .6
, .8
, etc. for the
+intermediate object files are also gone; now they are just plain .o
files.
+
+For example, to build and link a program on amd64 for Darwin
+using the tools directly, rather than through go build
,
+one would run:
+
+$ export GOOS=darwin GOARCH=amd64 +$ go tool compile program.go +$ go tool link program.o ++ +
+Because the go/types
package
+has now moved into the main repository (see below),
+the vet
and
+cover
+tools have also been moved.
+They are no longer maintained in the external golang.org/x/tools
repository,
+although (deprecated) source still resides there for compatibility with old releases.
+
+As described above, the compiler in Go 1.5 is a single Go program,
+translated from the old C source, that replaces 6g
, 8g
,
+and so on.
+Its target is configured by the environment variables GOOS
and GOARCH
.
+
+The 1.5 compiler is mostly equivalent to the old,
+but some internal details have changed.
+One significant change is that evaluation of constants now uses
+the math/big
package
+rather than a custom (and less well tested) implementation of high precision
+arithmetic.
+We do not expect this to affect the results.
+
+For the amd64 architecture only, the compiler has a new option, -dynlink
,
+that assists dynamic linking by supporting references to Go symbols
+defined in external shared libraries.
+
+Like the compiler and linker, the assembler in Go 1.5 is a single program
+that replaces the suite of assemblers (6a
,
+8a
, etc.) and the environment variables
+GOARCH
and GOOS
+configure the architecture and operating system.
+Unlike the other programs, the assembler is a wholly new program
+written in Go.
+
+The new assembler is very nearly compatible with the previous +ones, but there are a few changes that may affect some +assembler source files. +See the updated assembler guide +for more specific information about these changes. In summary: + +
+ +
+First, the expression evaluation used for constants is a little
+different.
+It now uses unsigned 64-bit arithmetic and the precedence
+of operators (+
, -
, <<
, etc.)
+comes from Go, not C.
+We expect these changes to affect very few programs but
+manual verification may be required.
+
+Perhaps more important is that on machines where
+SP
or PC
is only an alias
+for a numbered register,
+such as R13
for the stack pointer and
+R15
for the hardware program counter
+on ARM,
+a reference to such a register that does not include a symbol
+is now illegal.
+For example, SP
and 4(SP)
are
+illegal but sym+4(SP)
is fine.
+On such machines, to refer to the hardware register use its
+true R
name.
+
+One minor change is that some of the old assemblers +permitted the notation +
+ ++constant=value ++ +
+to define a named constant.
+Since this is always possible to do with the traditional
+C-like #define
notation, which is still
+supported (the assembler includes an implementation
+of a simplified C preprocessor), the feature was removed.
+
+The linker in Go 1.5 is now one Go program
+that replaces 6l
, 8l
, etc.
+Its operating system and instruction set are specified
+by the environment variables GOOS
and GOARCH
.
+
+There are several other changes.
+The most significant is the addition of a -buildmode
option that
+expands the style of linking; it now supports
+situations such as building shared libraries and allowing other languages
+to call into Go libraries.
+Some of these were outlined in a design document.
+For a list of the available build modes and their use, run
+
+$ go help buildmode ++ +
+Another minor change is that the linker no longer records build time stamps in +the header of Windows executables. +Also, although this may be fixed, Windows cgo executables are missing some +DWARF information. +
+ +
+Finally, the -X
flag, which takes two arguments,
+as in
+
+-X importpath.name value ++ +
+now also accepts a more common Go flag style with a single argument
+that is itself a name=value
pair:
+
+-X importpath.name=value ++ +
+Although the old syntax still works, it is recommended that uses of this +flag in scripts and the like be updated to the new form. +
+ +
+The go
command's basic operation
+is unchanged, but there are a number of changes worth noting.
+
+The previous release introduced the idea of a directory internal to a package
+being unimportable through the go
command.
+In 1.4, it was tested with the introduction of some internal elements
+in the core repository.
+As suggested in the design document,
+that change is now being made available to all repositories.
+The rules are explained in the design document, but in summary any
+package in or under a directory named internal
may
+be imported by packages rooted in the same subtree.
+Existing packages with directory elements named internal
may be
+inadvertently broken by this change, which was why it was advertised
+in the last release.
+
+Another change in how packages are handled is the experimental
+addition of support for "vendoring".
+For details, see the documentation for the go
command
+and the design document.
+
+There have also been several minor changes. +Read the documentation for full details. +
+ +.swig
and .swigcxx
+now require SWIG 3.0.6 or later.
+install
subcommand now removes the
+binary created by the build
subcommand
+in the source directory, if present,
+to avoid problems having two binaries present in the tree.
+std
(standard library) wildcard package name
+now excludes commands.
+A new cmd
wildcard covers the commands.
+-asmflags
build option
+sets flags to pass to the assembler.
+However,
+the -ccflags
build option has been dropped;
+it was specific to the old, now deleted C compiler .
+-buildmode
build option
+sets the build mode, described above.
+-pkgdir
build option
+sets the location of installed package archives,
+to help isolate custom builds.
+-toolexec
build option
+allows substitution of a different command to invoke
+the compiler and so on.
+This acts as a custom replacement for go tool
.
+test
subcommand now has a -count
+flag to specify how many times to run each test and benchmark.
+The testing
package
+does the work here, through the -test.count
flag.
+generate
subcommand has a couple of new features.
+The -run
option specifies a regular expression to select which directives
+to execute; this was proposed but never implemented in 1.4.
+The executing pattern now has access to two new environment variables:
+$GOLINE
returns the source line number of the directive
+and $DOLLAR
expands to a dollar sign.
+get
subcommand now has a -insecure
+flag that must be enabled if fetching from an insecure repository, one that
+does not encrypt the connection.
+
+The go tool vet
command now does
+more thorough validation of struct tags.
+
+A new tool is available for dynamic execution tracing of Go programs.
+The usage is analogous to how the test coverage tool works.
+Generation of traces is integrated into go test
,
+and then a separate execution of the tracing tool itself analyzes the results:
+
+$ go test -trace=trace.out path/to/package +$ go tool trace [flags] pkg.test trace.out ++ +
+The flags enable the output to be displayed in a browser window.
+For details, run go tool trace -help
.
+There is also a description of the tracing facility in this
+talk
+from GopherCon 2015.
+
+A few releases back, the go doc
+command was deleted as being unnecessary.
+One could always run "godoc .
" instead.
+The 1.5 release introduces a new go doc
+command with a more convenient command-line interface than
+godoc
's.
+It is designed for command-line usage specifically, and provides a more
+compact and focused presentation of the documentation for a package
+or its elements, according to the invocation.
+It also provides case-insensitive matching and
+support for showing the documentation for unexported symbols.
+For details run "go help doc
".
+
+When parsing #cgo
lines,
+the invocation ${SRCDIR}
is now
+expanded into the path to the source directory.
+This allows options to be passed to the
+compiler and linker that involve file paths relative to the
+source code directory. Without the expansion the paths would be
+invalid when the current working directory changes.
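+A hedged sketch (the library, header, and directory names are invented): a
+package can now anchor its compiler and linker flags to its own source
+directory.
+
+package mylib
+
+// #cgo CFLAGS: -I${SRCDIR}/include
+// #cgo LDFLAGS: -L${SRCDIR}/libs -lmylib
+// #include "mylib.h"
+import "C"
+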
+
+Solaris now has full cgo support. +
+ ++On Windows, cgo now uses external linking by default. +
+ ++When a C struct ends with a zero-sized field, but the struct itself is +not zero-sized, Go code can no longer refer to the zero-sized field. +Any such references will have to be rewritten. +
+ ++As always, the changes are so general and varied that precise statements +about performance are difficult to make. +The changes are even broader ranging than usual in this release, which +includes a new garbage collector and a conversion of the runtime to Go. +Some programs may run faster, some slower. +On average the programs in the Go 1 benchmark suite run a few percent faster in Go 1.5 +than they did in Go 1.4, +while as mentioned above the garbage collector's pauses are +dramatically shorter, and almost always under 10 milliseconds. +
+ ++Builds in Go 1.5 will be slower by a factor of about two. +The automatic translation of the compiler and linker from C to Go resulted in +unidiomatic Go code that performs poorly compared to well-written Go. +Analysis tools and refactoring helped to improve the code, but much remains to be done. +Further profiling and optimization will continue in Go 1.6 and future releases. +For more details, see these slides +and associated video. +
+ +
+The flag package's
+PrintDefaults
+function, and method on FlagSet
,
+have been modified to create nicer usage messages.
+The format has been changed to be more human-friendly and in the usage
+messages a word quoted with `backquotes` is taken to be the name of the
+flag's operand to display in the usage message.
+For instance, a flag created with the invocation,
+
+cpuFlag = flag.Int("cpu", 1, "run `N` processes in parallel") ++ +
+will show the help message, +
+ ++-cpu N + run N processes in parallel (default 1) ++ +
+Also, the default is now listed only when it is not the zero value for the type. +
+ +
+The math/big
package
+has a new, fundamental data type,
+Float
,
+which implements arbitrary-precision floating-point numbers.
+A Float
value is represented by a boolean sign,
+a variable-length mantissa, and a 32-bit fixed-size signed exponent.
+The precision of a Float
(the mantissa size in bits)
+can be specified explicitly or is otherwise determined by the first
+operation that creates the value.
+Once created, the size of a Float
's mantissa may be modified with the
+SetPrec
method.
+Floats
support the concept of infinities, such as are created by
+overflow, but values that would lead to the equivalent of IEEE 754 NaNs
+trigger a panic.
+Float
operations support all IEEE-754 rounding modes.
+When the precision is set to 24 (53) bits,
+operations that stay within the range of normalized float32
+(float64
)
+values produce the same results as the corresponding IEEE-754
+arithmetic on those values.
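+A brief sketch of the API: the precision is set explicitly here and the
+quotient is carried at that precision.
+
+package main
+
+import (
+	"fmt"
+	"math/big"
+)
+
+func main() {
+	one := new(big.Float).SetPrec(200).SetInt64(1)
+	three := new(big.Float).SetPrec(200).SetInt64(3)
+
+	third := new(big.Float).SetPrec(200)
+	third.Quo(one, three) // 1/3 computed with a 200-bit mantissa
+
+	fmt.Println(third.Text('g', 30))
+}
+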
+
+The go/types
package
+up to now has been maintained in the golang.org/x
+repository; as of Go 1.5 it has been relocated to the main repository.
+The code at the old location is now deprecated.
+There is also a modest API change in the package, discussed below.
+
+Associated with this move, the
+go/constant
+package also moved to the main repository;
+it was golang.org/x/tools/exact
before.
+The go/importer
package
+also moved to the main repository,
+as well as some tools described above.
+
+The DNS resolver in the net package has almost always used cgo
to access
+the system interface.
+A change in Go 1.5 means that on most Unix systems DNS resolution
+will no longer require cgo
, which simplifies execution
+on those platforms.
+Now, if the system's networking configuration permits, the native Go resolver
+will suffice.
+The important effect of this change is that each DNS resolution occupies a goroutine
+rather than a thread,
+so a program with multiple outstanding DNS requests will consume fewer operating
+system resources.
+
+The decision of how to run the resolver applies at run time, not build time.
+The netgo
build tag that has been used to enforce the use
+of the Go resolver is no longer necessary, although it still works.
+A new netcgo
build tag forces the use of the cgo
resolver at
+build time.
+To force cgo
resolution at run time set
+GODEBUG=netdns=cgo
in the environment.
+More debug options are documented here.
+
+This change applies to Unix systems only. +Windows, Mac OS X, and Plan 9 systems behave as before. +
+ +
+The reflect
package
+has two new functions: ArrayOf
+and FuncOf
.
+These functions, analogous to the extant
+SliceOf
function,
+create new types at runtime to describe arrays and functions.
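+A brief sketch of both functions:
+
+package main
+
+import (
+	"fmt"
+	"reflect"
+)
+
+func main() {
+	// [8]byte, built at run time.
+	arr := reflect.ArrayOf(8, reflect.TypeOf(byte(0)))
+
+	// func(string) int, built at run time.
+	fn := reflect.FuncOf(
+		[]reflect.Type{reflect.TypeOf("")},
+		[]reflect.Type{reflect.TypeOf(0)},
+		false, // not variadic
+	)
+
+	fmt.Println(arr, fn) // [8]uint8 func(string) int
+}
+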
+
+Several dozen bugs were found in the standard library
+through randomized testing with the
+go-fuzz
tool.
+Bugs were fixed in the
+archive/tar
,
+archive/zip
,
+compress/flate
,
+encoding/gob
,
+fmt
,
+html/template
,
+image/gif
,
+image/jpeg
,
+image/png
, and
+text/template
,
+packages.
+The fixes harden the implementation against incorrect and malicious inputs.
+
archive/zip
package's
+Writer
type now has a
+SetOffset
+method to specify the location within the output stream at which to write the archive.
+Reader
in the
+bufio
package now has a
+Discard
+method to discard data from the input.
+bytes
package,
+the Buffer
type
+now has a Cap
method
+that reports the number of bytes allocated within the buffer.
+Similarly, in both the bytes
+and strings
packages,
+the Reader
+type now has a Size
+method that reports the original length of the underlying slice or string.
+bytes
and
+strings
packages
+also now have a LastIndexByte
+function that locates the rightmost byte with that value in the argument.
+crypto
package
+has a new interface, Decrypter
,
+that abstracts the behavior of a private key used in asymmetric decryption.
+crypto/cipher
package,
+the documentation for the Stream
+interface has been clarified regarding the behavior when the source and destination are
+different lengths.
+If the destination is shorter than the source, the method will panic.
+This is not a change in the implementation, only the documentation.
+crypto/cipher
package,
+there is now support for nonce lengths other than 96 bytes in AES's Galois/Counter mode (GCM),
+which some protocols require.
+crypto/elliptic
package,
+there is now a Name
field in the
+CurveParams
struct,
+and the curves implemented in the package have been given names.
+These names provide a safer way to select a curve, as opposed to
+selecting its bit size, for cryptographic systems that are curve-dependent.
+crypto/elliptic
package,
+the Unmarshal
function
+now verifies that the point is actually on the curve.
+(If it is not, the function returns nils).
+This change guards against certain attacks.
+crypto/sha512
+package now has support for the two truncated versions of
+the SHA-512 hash algorithm, SHA-512/224 and SHA-512/256.
+crypto/tls
package
+minimum protocol version now defaults to TLS 1.0.
+The old default, SSLv3, is still available through Config
if needed.
+crypto/tls
package
+now supports Signed Certificate Timestamps (SCTs) as specified in RFC 6962.
+The server serves them if they are listed in the
+Certificate
struct,
+and the client requests them and exposes them, if present,
+in its ConnectionState
struct.
+
+crypto/tls
client connection,
+previously only available via the
+OCSPResponse
method,
+is now exposed in the ConnectionState
struct.
+crypto/tls
server implementation
+will now always call the
+GetCertificate
function in
+the Config
struct
+to select a certificate for the connection when none is supplied.
+crypto/tls
package
+can now be changed while the server is running.
+This is done through the new
+SetSessionTicketKeys
+method of the
+Config
type.
+crypto/x509
package,
+wildcards are now accepted only in the leftmost label as defined in
+the specification.
+crypto/x509
package,
+the handling of unknown critical extensions has been changed.
+They used to cause parse errors but now they are parsed and caused errors only
+in Verify
.
+The new field UnhandledCriticalExtensions
of
+Certificate
records these extensions.
+DB
type of the
+database/sql
package
+now has a Stats
method
+to retrieve database statistics.
+debug/dwarf
+package has extensive additions to better support DWARF version 4.
+See for example the definition of the new type
+Class
.
+debug/dwarf
package
+also now supports decoding of DWARF line tables.
+debug/elf
+package now has support for the 64-bit PowerPC architecture.
+encoding/base64
package
+now supports unpadded encodings through two new encoding variables,
+RawStdEncoding
and
+RawURLEncoding
.
+encoding/json
package
+now returns an UnmarshalTypeError
+if a JSON value is not appropriate for the target variable or component
+to which it is being unmarshaled.
+encoding/json
's
+Decoder
+type has a new method that provides a streaming interface for decoding
+a JSON document:
+Token
.
+It also interoperates with the existing functionality of Decode
,
+which will continue a decode operation already started with Decoder.Token
.
+flag
package
+has a new function, UnquoteUsage
,
+to assist in the creation of usage messages using the new convention
+described above.
+fmt
package,
+a value of type Value
now
+prints what it holds, rather than use the reflect.Value
's Stringer
+method, which produces things like <int Value>
.
+EmptyStmt
type
+in the go/ast
package now
+has a boolean Implicit
field that records whether the
+semicolon was implicitly added or was present in the source.
+go/build
package
+reserves GOARCH
values for a number of architectures that Go might support one day.
+This is not a promise that it will.
+Also, the Package
struct
+now has a PkgTargetRoot
field that stores the
+architecture-dependent root directory in which to install, if known.
+go/types
+package allows one to control the prefix attached to package-level names using
+the new Qualifier
+function type as an argument to several functions. This is an API change for
+the package, but since it is new to the core, it is not breaking the Go 1 compatibility
+rules since code that uses the package must explicitly ask for it at its new location.
+To update, run
+go fix
on your package.
+image
package,
+the Rectangle
type
+now implements the Image
interface,
+so a Rectangle
can serve as a mask when drawing.
+image
package,
+to assist in the handling of some JPEG images,
+there is now support for 4:1:1 and 4:1:0 YCbCr subsampling and basic
+CMYK support, represented by the new image.CMYK
struct.
+image/color
package
+adds basic CMYK support, through the new
+CMYK
struct,
+the CMYKModel
color model, and the
+CMYKToRGB
function, as
+needed by some JPEG images.
+image/color
package,
+the conversion of a YCbCr
+value to RGBA
has become more precise.
+Previously, the low 8 bits were just an echo of the high 8 bits;
+now they contain more accurate information.
+Because of the echo property of the old code, the operation
+uint8(r)
to extract an 8-bit red value worked, but is incorrect.
+In Go 1.5, that operation may yield a different value.
+The correct code is, and always was, to select the high 8 bits:
+uint8(r>>8)
.
+Incidentally, the image/draw
package
+provides better support for such conversions; see
+this blog post
+for more information.
+Index
+now honors the alpha channel.
+image/gif
package
+includes a couple of generalizations.
+A multiple-frame GIF file can now have an overall bounds different
+from all the contained single frames' bounds.
+Also, the GIF
struct
+now has a Disposal
field
+that specifies the disposal method for each frame.
+io
package
+adds a CopyBuffer
function
+that is like Copy
but
+uses a caller-provided buffer, permitting control of allocation and buffer size.
+log
package
+has a new LUTC
flag
+that causes time stamps to be printed in the UTC time zone.
+It also adds a SetOutput
method
+for user-created loggers.
+Max
was not detecting all possible NaN bit patterns.
+This is fixed in Go 1.5, so programs that use math.Max
on data including NaNs may behave differently,
+but now correctly according to the IEEE754 definition of NaNs.
+math/big
package
+adds a new Jacobi
+function for integers and a new
+ModSqrt
+method for the Int
type.
+WordDecoder
type
+to decode MIME headers containing RFC 204-encoded words.
+It also provides BEncoding
and
+QEncoding
+as implementations of the encoding schemes of RFC 2045 and RFC 2047.
+mime
package also adds an
+ExtensionsByType
+function that returns the MIME extensions know to be associated with a given MIME type.
+mime/quotedprintable
+package that implements the quoted-printable encoding defined by RFC 2045.
+net
package will now
+Dial
hostnames by trying each
+IP address in order until one succeeds.
+The Dialer.DualStack
+mode now implements Happy Eyeballs
+(RFC 6555) by giving the
+first address family a 300ms head start; this value can be overridden by
+the new Dialer.FallbackDelay
.
+net
package have been
+tidied up.
+Most now return an
+OpError
value
+with more information than before.
+Also, the OpError
+type now includes a Source
field that holds the local
+network address.
+net/http
package now
+has support for setting trailers from a server Handler
.
+For details, see the documentation for
+ResponseWriter
.
+net/http
+Request
by setting the new
+Request.Cancel
+field.
+It is supported by http.Transport
.
+The Cancel
field's type is compatible with the
+context.Context.Done
+return value.
+net/http
package,
+there is code to ignore the zero Time
value
+in the ServeContent
function.
+As of Go 1.5, it now also ignores a time value equal to the Unix epoch.
+net/http/fcgi
package
+exports two new errors,
+ErrConnClosed
and
+ErrRequestAborted
,
+to report the corresponding error conditions.
+net/http/cgi
package
+had a bug that mishandled the values of the environment variables
+REMOTE_ADDR
and REMOTE_HOST
.
+This has been fixed.
+Also, starting with Go 1.5 the package sets the REMOTE_PORT
+variable.
+net/mail
package
+adds an AddressParser
+type that can parse mail addresses.
+net/smtp
package
+now has a TLSConnectionState
+accessor to the Client
+type that returns the client's TLS state.
+os
package
+has a new LookupEnv
function
+that is similar to Getenv
+but can distinguish between an empty environment variable and a missing one.
+os/signal
package
+adds new Ignore
and
+Reset
functions.
+runtime
,
+runtime/trace
,
+and net/http/pprof
packages
+each have new functions to support the tracing facilities described above:
+ReadTrace
,
+StartTrace
,
+StopTrace
,
+Start
,
+Stop
, and
+Trace
.
+See the respective documentation for details.
+runtime/pprof
package
+by default now includes overall memory statistics in all memory profiles.
+strings
package
+has a new Compare
function.
+This is present to provide symmetry with the bytes
package
+but is otherwise unnecessary as strings support comparison natively.
+WaitGroup
implementation in
+package sync
+now diagnoses code that races a call to Add
+against a return from Wait
.
+If it detects this condition, the implementation panics.
+syscall
package,
+the Linux SysProcAttr
struct now has a
+GidMappingsEnableSetgroups
field, made necessary
+by security changes in Linux 3.19.
+On all Unix systems, the struct also has new Foreground
and Pgid
fields
+to provide more control when exec'ing.
+On Darwin, there is now a Syscall9
function
+to support calls with too many arguments.
+testing/quick
will now
+generate nil
values for pointer types,
+making it possible to use with recursive data structures.
+Also, the package now supports generation of array types.
+text/template
and
+html/template
packages,
+integer constants too large to be represented as a Go integer now trigger a
+parse error. Before, they were silently converted to floating point, losing
+precision.
+text/template
and
+html/template
packages,
+a new Option
method
+allows customization of the behavior of the template during execution.
+The sole implemented option allows control over how a missing key is
+handled when indexing a map.
+The default, which can now be overridden, is as before: to continue with an invalid value.
+time
package's
+Time
type has a new method
+AppendFormat
,
+which can be used to avoid allocation when printing a time value.
+unicode
package and associated
+support throughout the system has been upgraded from version 7.0 to
+Unicode 8.0.
++The latest Go release, version 1.6, arrives six months after 1.5. +Most of its changes are in the implementation of the language, runtime, and libraries. +There are no changes to the language specification. +As always, the release maintains the Go 1 promise of compatibility. +We expect almost all Go programs to continue to compile and run as before. +
+ ++The release adds new ports to Linux on 64-bit MIPS and Android on 32-bit x86; +defined and enforced rules for sharing Go pointers with C; +transparent, automatic support for HTTP/2; +and a new mechanism for template reuse. +
+ ++There are no language changes in this release. +
+ +
+Go 1.6 adds experimental ports to
+Linux on 64-bit MIPS (linux/mips64
and linux/mips64le
).
+These ports support cgo
but only with internal linking.
+
+Go 1.6 also adds an experimental port to Android on 32-bit x86 (android/386
).
+
+On FreeBSD, Go 1.6 defaults to using clang
, not gcc
, as the external C compiler.
+
+On Linux on little-endian 64-bit PowerPC (linux/ppc64le
),
+Go 1.6 now supports cgo
with external linking and
+is roughly feature complete.
+
+On NaCl, Go 1.5 required SDK version pepper-41. +Go 1.6 adds support for later SDK versions. +
+ +
+On 32-bit x86 systems using the -dynlink
or -shared
compilation modes,
+the register CX is now overwritten by certain memory references and should
+be avoided in hand-written assembly.
+See the assembly documentation for details.
+
+There is one major change to cgo
, along with one minor change.
+
+The major change is the definition of rules for sharing Go pointers with C code,
+to ensure that such C code can coexist with Go's garbage collector.
+Briefly, Go and C may share memory allocated by Go
+when a pointer to that memory is passed to C as part of a cgo
call,
+provided that the memory itself contains no pointers to Go-allocated memory,
+and provided that C does not retain the pointer after the call returns.
+These rules are checked by the runtime during program execution:
+if the runtime detects a violation, it prints a diagnosis and crashes the program.
+The checks can be disabled by setting the environment variable
+GODEBUG=cgocheck=0
, but note that the vast majority of
+code identified by the checks is subtly incompatible with garbage collection
+in one way or another.
+Disabling the checks will typically only lead to more mysterious failure modes.
+Fixing the code in question should be strongly preferred
+over turning off the checks.
+See the cgo
documentation for more details.
+
+The minor change is
+the addition of explicit C.complexfloat
and C.complexdouble
types,
+separate from Go's complex64
and complex128
.
+Matching the other numeric types, C's complex types and Go's complex type are
+no longer interchangeable.
+
+The compiler toolchain is mostly unchanged. +Internally, the most significant change is that the parser is now hand-written +instead of generated from yacc. +
+ +
+The compiler, linker, and go
command have a new flag -msan
,
+analogous to -race
and only available on linux/amd64,
+that enables interoperation with the Clang MemorySanitizer.
+Such interoperation is useful mainly for testing a program containing suspect C or C++ code.
+
+The linker has a new option -libgcc
to set the expected location
+of the C compiler support library when linking cgo
code.
+The option is only consulted when using -linkmode=internal
,
+and it may be set to none
to disable the use of a support library.
+
+The implementation of build modes started in Go 1.5 has been expanded to more systems.
+This release adds support for the c-shared
mode on android/386
, android/amd64
,
+android/arm64
, linux/386
, and linux/arm64
;
+for the shared
mode on linux/386
, linux/arm
, linux/amd64
, and linux/ppc64le
;
+and for the new pie
mode (generating position-independent executables) on
+android/386
, android/amd64
, android/arm
, android/arm64
, linux/386
,
+linux/amd64
, linux/arm
, linux/arm64
, and linux/ppc64le
.
+See the design document for details.
+
+As a reminder, the linker's -X
flag changed in Go 1.5.
+In Go 1.4 and earlier, it took two arguments, as in
+
+-X importpath.name value ++ +
+Go 1.5 added an alternative syntax using a single argument
+that is itself a name=value
pair:
+
+-X importpath.name=value ++ +
+In Go 1.5 the old syntax was still accepted, after printing a warning +suggesting use of the new syntax instead. +Go 1.6 continues to accept the old syntax and print the warning. +Go 1.7 will remove support for the old syntax. +
+ ++The release schedules for the GCC and Go projects do not coincide. +GCC release 5 contains the Go 1.4 version of gccgo. +The next release, GCC 6, will have the Go 1.6.1 version of gccgo. +
+ +
+The go
command's basic operation
+is unchanged, but there are a number of changes worth noting.
+
+Go 1.5 introduced experimental support for vendoring,
+enabled by setting the GO15VENDOREXPERIMENT
environment variable to 1
.
+Go 1.6 keeps the vendoring support, no longer considered experimental,
+and enables it by default.
+It can be disabled explicitly by setting
+the GO15VENDOREXPERIMENT
environment variable to 0
.
+Go 1.7 will remove support for the environment variable.
+
+The most likely problem caused by enabling vendoring by default happens
+in source trees containing an existing directory named vendor
that
+does not expect to be interpreted according to new vendoring semantics.
+In this case, the simplest fix is to rename the directory to anything other
+than vendor
and update any affected import paths.
+
+For details about vendoring,
+see the documentation for the go
command
+and the design document.
+
+There is a new build flag, -msan
,
+that compiles Go with support for the LLVM memory sanitizer.
+This is intended mainly for use when linking against C or C++ code
+that is being checked with the memory sanitizer.
+
+Go 1.5 introduced the
+go doc
command,
+which allows references to packages using only the package name, as in
+go
doc
http
.
+In the event of ambiguity, the Go 1.5 behavior was to use the package
+with the lexicographically earliest import path.
+In Go 1.6, ambiguity is resolved by preferring import paths with
+fewer elements, breaking ties using lexicographic comparison.
+An important effect of this change is that original copies of packages
+are now preferred over vendored copies.
+Successful searches also tend to run faster.
+
+The go vet
command now diagnoses
+passing function or method values as arguments to Printf
,
+such as when passing f
where f()
was intended.
+
+As always, the changes are so general and varied that precise statements +about performance are difficult to make. +Some programs may run faster, some slower. +On average the programs in the Go 1 benchmark suite run a few percent faster in Go 1.6 +than they did in Go 1.5. +The garbage collector's pauses are even lower than in Go 1.5, +especially for programs using +a large amount of memory. +
+ +
+There have been significant optimizations bringing more than 10% improvements
+to implementations of the
+compress/bzip2
,
+compress/gzip
,
+crypto/aes
,
+crypto/elliptic
,
+crypto/ecdsa
, and
+sort
packages.
+
+Go 1.6 adds transparent support in the
+net/http
package
+for the new HTTP/2 protocol.
+Go clients and servers will automatically use HTTP/2 as appropriate when using HTTPS.
+There is no exported API specific to details of the HTTP/2 protocol handling,
+just as there is no exported API specific to HTTP/1.1.
+
+Programs that must disable HTTP/2 can do so by setting
+Transport.TLSNextProto
(for clients)
+or
+Server.TLSNextProto
(for servers)
+to a non-nil, empty map.
+
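+For example, a minimal sketch of disabling HTTP/2 on both the client and the server side using these fields (the listen address is a placeholder):
+
+package main
+
+import (
+    "crypto/tls"
+    "net/http"
+)
+
+func main() {
+    // Client: a non-nil, empty TLSNextProto map prevents the upgrade to HTTP/2.
+    client := &http.Client{
+        Transport: &http.Transport{
+            TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
+        },
+    }
+    _ = client
+
+    // Server: likewise, an empty map disables the automatic HTTP/2 support.
+    srv := &http.Server{
+        Addr:         ":8443",
+        TLSNextProto: map[string]func(*http.Server, *tls.Conn, http.Handler){},
+    }
+    _ = srv
+}
+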
+Programs that must adjust HTTP/2 protocol-specific details can import and use
+golang.org/x/net/http2
,
+in particular its
+ConfigureServer
+and
+ConfigureTransport
+functions.
+
+The runtime has added lightweight, best-effort detection of concurrent misuse of maps. +As always, if one goroutine is writing to a map, no other goroutine should be +reading or writing the map concurrently. +If the runtime detects this condition, it prints a diagnosis and crashes the program. +The best way to find out more about the problem is to run the program +under the +race detector, +which will more reliably identify the race +and give more detail. +
+ +
+For program-ending panics, the runtime now by default
+prints only the stack of the running goroutine,
+not all existing goroutines.
+Usually only the current goroutine is relevant to a panic,
+so omitting the others significantly reduces irrelevant output
+in a crash message.
+To see the stacks from all goroutines in crash messages, set the environment variable
+GOTRACEBACK
to all
+or call
+debug.SetTraceback
+before the crash, and rerun the program.
+See the runtime documentation for details.
+
+Updating:
+Uncaught panics intended to dump the state of the entire program,
+such as when a timeout is detected or when explicitly handling a received signal,
+should now call debug.SetTraceback("all")
before panicking.
+Searching for uses of
+signal.Notify
may help identify such code.
+
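+For example, a minimal sketch of that pattern for a Unix-like system (the choice of signal is only illustrative):
+
+package main
+
+import (
+    "os"
+    "os/signal"
+    "runtime/debug"
+    "syscall"
+)
+
+func main() {
+    // Equivalent to running with GOTRACEBACK=all: a later uncaught panic
+    // dumps the stacks of all goroutines, not just the current one.
+    debug.SetTraceback("all")
+
+    // Panic on a received signal in order to dump the state of the program.
+    sig := make(chan os.Signal, 1)
+    signal.Notify(sig, syscall.SIGQUIT)
+    <-sig
+    panic("SIGQUIT received: dumping all goroutine stacks")
+}
+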
+On Windows, Go programs in Go 1.5 and earlier forced
+the global Windows timer resolution to 1ms at startup
+by calling timeBeginPeriod(1)
.
+Go no longer needs this for good scheduler performance,
+and changing the global timer resolution caused problems on some systems,
+so the call has been removed.
+
+When using -buildmode=c-archive
or
+-buildmode=c-shared
to build an archive or a shared
+library, the handling of signals has changed.
+In Go 1.5 the archive or shared library would install a signal handler
+for most signals.
+In Go 1.6 it will only install a signal handler for the
+synchronous signals needed to handle run-time panics in Go code:
+SIGBUS, SIGFPE, SIGSEGV.
+See the os/signal package for more
+details.
+
+The
+reflect
package has
+resolved a long-standing incompatibility
+between the gc and gccgo toolchains
+regarding embedded unexported struct types containing exported fields.
+Code that walks data structures using reflection, especially to implement
+serialization in the spirit
+of the
+encoding/json
and
+encoding/xml
packages,
+may need to be updated.
+
+The problem arises when using reflection to walk through
+an embedded unexported struct-typed field
+into an exported field of that struct.
+In this case, reflect
had incorrectly reported
+the embedded field as exported, by returning an empty Field.PkgPath
.
+Now it correctly reports the field as unexported
+but ignores that fact when evaluating access to exported fields
+contained within the struct.
+
+Updating: +Typically, code that previously walked over structs and used +
+ ++f.PkgPath != "" ++ +
+to exclude inaccessible fields +should now use +
+ ++f.PkgPath != "" && !f.Anonymous ++ +
+For example, see the changes to the implementations of
+encoding/json
and
+encoding/xml
.
+
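+For example, a minimal sketch of such a walk under the new behavior (the struct types are illustrative only):
+
+package main
+
+import (
+    "fmt"
+    "reflect"
+)
+
+type inner struct{ Exported string } // unexported embedded struct type
+
+type Outer struct {
+    inner // embedded; its Exported field remains accessible
+}
+
+func main() {
+    t := reflect.TypeOf(Outer{inner{Exported: "ok"}})
+    for i := 0; i < t.NumField(); i++ {
+        f := t.Field(i)
+        // Skip truly inaccessible fields: unexported and not embedded.
+        if f.PkgPath != "" && !f.Anonymous {
+            continue
+        }
+        fmt.Println("walk into field:", f.Name) // prints: walk into field: inner
+    }
+}
+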
+In the
+sort
+package,
+the implementation of
+Sort
+has been rewritten to make about 10% fewer calls to the
+Interface
's
+Less
and Swap
+methods, with a corresponding overall time savings.
+The new algorithm does choose a different ordering than before
+for values that compare equal (those pairs for which Less(i,
j)
and Less(j,
i)
are false).
+
+Updating:
+The definition of Sort
makes no guarantee about the final order of equal values,
+but the new behavior may still break programs that expect a specific order.
+Such programs should either refine their Less
implementations
+to report the desired order
+or should switch to
+Stable
,
+which preserves the original input order
+of equal values.
+
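+For example, a minimal sketch contrasting the two (the sorter type is illustrative only):
+
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+type byLen []string
+
+func (s byLen) Len() int           { return len(s) }
+func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
+func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
+
+func main() {
+    words := []string{"bb", "aa", "cc", "a"}
+    // "bb", "aa", and "cc" compare equal under Less; sort.Sort may order them
+    // arbitrarily, while sort.Stable preserves their original order.
+    sort.Stable(byLen(words))
+    fmt.Println(words) // [a bb aa cc]
+}
+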
+In the +text/template package, +there are two significant new features to make writing templates easier. +
+ ++First, it is now possible to trim spaces around template actions, +which can make template definitions more readable. +A minus sign at the beginning of an action says to trim space before the action, +and a minus sign at the end of an action says to trim space after the action. +For example, the template +
+ ++{{"{{"}}23 -}} + < +{{"{{"}}- 45}} ++ +
+formats as 23<45
.
+
+Second, the new {{"{{"}}block}}
action,
+combined with allowing redefinition of named templates,
+provides a simple way to define pieces of a template that
+can be replaced in different instantiations.
+There is an example
+in the text/template
package that demonstrates this new feature.
+
archive/tar
package's
+implementation corrects many bugs in rare corner cases of the file format.
+One visible change is that the
+Reader
type's
+Read
method
+now presents the content of special file types as being empty,
+returning io.EOF
immediately.
+archive/zip
package, the
+Reader
type now has a
+RegisterDecompressor
method,
+and the
+Writer
type now has a
+RegisterCompressor
method,
+enabling control over compression options for individual zip files.
+These take precedence over the pre-existing global
+RegisterDecompressor
and
+RegisterCompressor
functions.
+bufio
package's
+Scanner
type now has a
+Buffer
method,
+to specify an initial buffer and maximum buffer size to use during scanning.
+This makes it possible, when needed, to scan tokens larger than
+MaxScanTokenSize
.
+Also for the Scanner
, the package now defines the
+ErrFinalToken
error value, for use by
+split functions to abort processing or to return a final empty token.
+compress/flate
package
+has deprecated its
+ReadError
and
+WriteError
error implementations.
+In Go 1.5 they were only rarely returned when an error was encountered;
+now they are never returned, although they remain defined for compatibility.
+compress/flate
,
+compress/gzip
, and
+compress/zlib
packages
+now report
+io.ErrUnexpectedEOF
for truncated input streams, instead of
+io.EOF
.
+crypto/cipher
package now
+overwrites the destination buffer in the event of a GCM decryption failure.
+This is to allow the AESNI code to avoid using a temporary buffer.
+crypto/tls
package
+has a variety of minor changes.
+It now allows
+Listen
+to succeed when the
+Config
+has a nil Certificates
, as long as the GetCertificate
callback is set,
+it adds support for RSA with AES-GCM cipher suites,
+and
+it adds a
+RecordHeaderError
+to allow clients (in particular, the net/http
package)
+to report a better error when attempting a TLS connection to a non-TLS server.
+crypto/x509
package
+now permits certificates to contain negative serial numbers
+(technically an error, but unfortunately common in practice),
+and it defines a new
+InsecureAlgorithmError
+to give a better error message when rejecting a certificate
+signed with an insecure algorithm like MD5.
+debug/dwarf
and
+debug/elf
packages
+together add support for compressed DWARF sections.
+User code needs no updating: the sections are decompressed automatically when read.
+debug/elf
package
+adds support for general compressed ELF sections.
+User code needs no updating: the sections are decompressed automatically when read.
+However, compressed
+Sections
do not support random access:
+they have a nil ReaderAt
field.
+encoding/asn1
package
+now exports
+tag and class constants
+useful for advanced parsing of ASN.1 structures.
+encoding/asn1
package,
+Unmarshal
now rejects various non-standard integer and length encodings.
+encoding/base64
package's
+Decoder
has been fixed
+to process the final bytes of its input. Previously it processed as many four-byte tokens as
+possible but ignored the remainder, up to three bytes.
+The Decoder
therefore now handles inputs in unpadded encodings (like
+RawURLEncoding) correctly,
+but it also rejects inputs in padded encodings that are truncated or end with invalid bytes,
+such as trailing spaces.
+encoding/json
package
+now checks the syntax of a
+Number
+before marshaling it, requiring that it conforms to the JSON specification for numeric values.
+As in previous releases, the zero Number
(an empty string) is marshaled as a literal 0 (zero).
+encoding/xml
package's
+Marshal
+function now supports a cdata
attribute, such as chardata
+but encoding its argument in one or more <![CDATA[ ... ]]>
tags.
+encoding/xml
package,
+Decoder
's
+Token
method
+now reports an error when encountering EOF before seeing all open tags closed,
+consistent with its general requirement that tags in the input be properly matched.
+To avoid that requirement, use
+RawToken
.
+fmt
package now allows
+any integer type as an argument to
+Printf
's *
width and precision specification.
+In previous releases, the argument to *
was required to have type int
.
+fmt
package,
+Scanf
can now scan hexadecimal strings using %X, as an alias for %x.
+Both formats accept any mix of upper- and lower-case hexadecimal.
+image
+and
+image/color
packages
+add
+NYCbCrA
+and
+NYCbCrA
+types, to support Y'CbCr images with non-premultiplied alpha.
+io
package's
+MultiWriter
+implementation now implements a WriteString
method,
+for use by
+WriteString
.
+math/big
package,
+Int
adds
+Append
+and
+Text
+methods to give more control over printing.
+math/big
package,
+Float
now implements
+encoding.TextMarshaler
and
+encoding.TextUnmarshaler
,
+allowing it to be serialized in a natural form by the
+encoding/json
and
+encoding/xml
packages.
+math/big
package,
+Float
's
+Append
method now supports the special precision argument -1.
+As in
+strconv.ParseFloat
,
+precision -1 means to use the smallest number of digits necessary such that
+Parse
+reading the result into a Float
of the same precision
+will yield the original value.
+math/rand
package
+adds a
+Read
+function, and likewise
+Rand
adds a
+Read
method.
+These make it easier to generate pseudorandom test data.
+Note that, like the rest of the package,
+these should not be used in cryptographic settings;
+for such purposes, use the crypto/rand
package instead.
+net
package's
+ParseMAC
function now accepts 20-byte IP-over-InfiniBand (IPoIB) link-layer addresses.
+net
package,
+there have been a few changes to DNS lookups.
+First, the
+DNSError
error implementation now implements
+Error
,
+and in particular its new
+IsTemporary
+method returns true for DNS server errors.
+Second, DNS lookup functions such as
+LookupAddr
+now return rooted domain names (with a trailing dot)
+on Plan 9 and Windows, to match the behavior of Go on Unix systems.
+net/http
package has
+a number of minor additions beyond the HTTP/2 support already discussed.
+First, the
+FileServer
now sorts its generated directory listings by file name.
+Second, the
+ServeFile
function now refuses to serve a result
+if the request's URL path contains “..” (dot-dot) as a path element.
+Programs should typically use FileServer
and
+Dir
+instead of calling ServeFile
directly.
+Programs that need to serve file content in response to requests for URLs containing dot-dot can
+still call ServeContent
.
+Third, the
+Client
now allows user code to set the
+Expect:
100-continue
header (see
+Transport.ExpectContinueTimeout
).
+Fourth, there are
+five new error codes:
+StatusPreconditionRequired
(428),
+StatusTooManyRequests
(429),
+StatusRequestHeaderFieldsTooLarge
(431), and
+StatusNetworkAuthenticationRequired
(511) from RFC 6585,
+as well as the recently-approved
+StatusUnavailableForLegalReasons
(451).
+Fifth, the implementation and documentation of
+CloseNotifier
+has been substantially changed.
+The Hijacker
+interface now works correctly on connections that have previously
+been used with CloseNotifier
.
+The documentation now describes when CloseNotifier
+is expected to work.
+net/http
package,
+there are a few changes related to the handling of a
+Request
data structure with its Method
field set to the empty string.
+An empty Method
field has always been documented as an alias for "GET"
+and it remains so.
+However, Go 1.6 fixes a few routines that did not treat an empty
+Method
the same as an explicit "GET"
.
+Most notably, in previous releases
+Client
followed redirects only with
+Method
set explicitly to "GET"
;
+in Go 1.6 Client
also follows redirects for the empty Method
.
+Finally,
+NewRequest
accepts a method
argument that has not been
+documented as allowed to be empty.
+In past releases, passing an empty method
argument resulted
+in a Request
with an empty Method
field.
+In Go 1.6, the resulting Request
always has an initialized
+Method
field: if its argument is an empty string, NewRequest
+sets the Method
field in the returned Request
to "GET"
.
+net/http/httptest
package's
+ResponseRecorder
now initializes a default Content-Type header
+using the same content-sniffing algorithm as in
+http.Server
.
+net/url
package's
+Parse
is now stricter and more spec-compliant regarding the parsing
+of host names.
+For example, spaces in the host name are no longer accepted.
+net/url
package,
+the Error
type now implements
+net.Error
.
+os
package's
+IsExist
,
+IsNotExist
,
+and
+IsPermission
+now return correct results when inquiring about a
+SyscallError
.
+If a write to os.Stdout
+or os.Stderr
(more precisely, an os.File
+opened for file descriptor 1 or 2) fails due to a broken pipe error,
+the program will raise a SIGPIPE
signal.
+By default this will cause the program to exit; this may be changed by
+calling the
+os/signal
+Notify
function
+for syscall.SIGPIPE
.
+A write to a broken pipe on a file descriptor other than 1 or 2 will simply
+return syscall.EPIPE
(possibly wrapped in
+os.PathError
+and/or os.SyscallError
)
+to the caller.
+The old behavior of raising an uncatchable SIGPIPE
signal
+after 10 consecutive writes to a broken pipe no longer occurs.
+os/exec
package,
+Cmd
's
+Output
method continues to return an
+ExitError
when a command exits with an unsuccessful status.
+If standard error would otherwise have been discarded,
+the returned ExitError
now holds a prefix and suffix
+(currently 32 kB) of the failed command's standard error output,
+for debugging or for inclusion in error messages.
+The ExitError
's
+String
+method does not show the captured standard error;
+programs must retrieve it from the data structure
+separately.
+path/filepath
package's
+Join
function now correctly handles the case when the base is a relative drive path.
+For example, Join(`c:`,
`a`)
now
+returns `c:a`
instead of `c:\a`
as in past releases.
+This may affect code that expects the incorrect result.
+regexp
package,
+the
+Regexp
type has always been safe for use by
+concurrent goroutines.
+It uses a sync.Mutex
to protect
+a cache of scratch spaces used during regular expression searches.
+Some high-concurrency servers using the same Regexp
from many goroutines
+have seen degraded performance due to contention on that mutex.
+To help such servers, Regexp
now has a
+Copy
method,
+which makes a copy of a Regexp
that shares most of the structure
+of the original but has its own scratch space cache.
+Two goroutines can use different copies of a Regexp
+without mutex contention.
+A copy does have additional space overhead, so Copy
+should only be used when contention has been observed.
+strconv
package adds
+IsGraphic
,
+similar to IsPrint
.
+It also adds
+QuoteToGraphic
,
+QuoteRuneToGraphic
,
+AppendQuoteToGraphic
,
+and
+AppendQuoteRuneToGraphic
,
+analogous to
+QuoteToASCII
,
+QuoteRuneToASCII
,
+and so on.
+The ASCII
family escapes all space characters except ASCII space (U+0020).
+In contrast, the Graphic
family does not escape any Unicode space characters (category Zs).
+testing
package,
+when a test calls
+t.Parallel,
+that test is paused until all non-parallel tests complete, and then
+that test continues execution with all other parallel tests.
+Go 1.6 changes the time reported for such a test:
+previously the time counted only the parallel execution,
+but now it also counts the time from the start of testing
+until the call to t.Parallel
.
+text/template
package
+contains two minor changes, in addition to the major changes
+described above.
+First, it adds a new
+ExecError
type
+returned for any error during
+Execute
+that does not originate in a Write
to the underlying writer.
+Callers can distinguish template usage errors from I/O errors by checking for
+ExecError
.
+Second, the
+Funcs
method
+now checks that the names used as keys in the
+FuncMap
+are identifiers that can appear in a template function invocation.
+If not, Funcs
panics.
+time
package's
+Parse
function has always rejected any day of month larger than 31,
+such as January 32.
+In Go 1.6, Parse
now also rejects February 29 in non-leap years,
+February 30, February 31, April 31, June 31, September 31, and November 31.
++The latest Go release, version 1.7, arrives six months after 1.6. +Most of its changes are in the implementation of the toolchain, runtime, and libraries. +There is one minor change to the language specification. +As always, the release maintains the Go 1 promise of compatibility. +We expect almost all Go programs to continue to compile and run as before. +
+ ++The release adds a port to IBM LinuxOne; +updates the x86-64 compiler back end to generate more efficient code; +includes the context package, promoted from the +x/net subrepository +and now used in the standard library; +and adds support in the testing package for +creating hierarchies of tests and benchmarks. +The release also finalizes the vendoring support +started in Go 1.5, making it a standard feature. +
+ +
+There is one tiny language change in this release.
+The section on terminating statements
+clarifies that to determine whether a statement list ends in a terminating statement,
+the “final non-empty statement” is considered the end,
+matching the existing behavior of the gc and gccgo compiler toolchains.
+In earlier releases the definition referred only to the “final statement,”
+leaving the effect of trailing empty statements at the least unclear.
+The go/types
+package has been updated to match the gc and gccgo compiler toolchains
+in this respect.
+This change has no effect on the correctness of existing programs.
+
+Go 1.7 adds support for macOS 10.12 Sierra. +Binaries built with versions of Go before 1.7 will not work +correctly on Sierra. +
+ +
+Go 1.7 adds an experimental port to Linux on z Systems (linux/s390x
)
+and the beginning of a port to Plan 9 on ARM (plan9/arm
).
+
+The experimental ports to Linux on 64-bit MIPS (linux/mips64
and linux/mips64le
)
+added in Go 1.6 now have full support for cgo and external linking.
+
+The experimental port to Linux on little-endian 64-bit PowerPC (linux/ppc64le
)
+now requires the POWER8 architecture or later.
+Big-endian 64-bit PowerPC (linux/ppc64
) only requires the
+POWER5 architecture.
+
+The OpenBSD port now requires OpenBSD 5.6 or later, for access to the getentropy(2) system call. +
+ ++There are some instabilities on FreeBSD that are known but not understood. +These can lead to program crashes in rare cases. +See issue 16136, +issue 15658, +and issue 16396. +Any help in solving these FreeBSD-specific issues would be appreciated. +
+ +
+For 64-bit ARM systems, the vector register names have been
+corrected to V0
through V31
;
+previous releases incorrectly referred to them as V32
through V63
.
+
+For 64-bit x86 systems, the following instructions have been added:
+PCMPESTRI
,
+RORXL
,
+RORXQ
,
+VINSERTI128
,
+VPADDD
,
+VPADDQ
,
+VPALIGNR
,
+VPBLENDD
,
+VPERM2F128
,
+VPERM2I128
,
+VPOR
,
+VPSHUFB
,
+VPSHUFD
,
+VPSLLD
,
+VPSLLDQ
,
+VPSLLQ
,
+VPSRLD
,
+VPSRLDQ
,
+and
+VPSRLQ
.
+
+This release includes a new code generation back end for 64-bit x86 systems, +following a proposal from 2015 +that has been under development since then. +The new back end, based on +SSA, +generates more compact, more efficient code +and provides a better platform for optimizations +such as bounds check elimination. +The new back end reduces the CPU time required by +our benchmark programs by 5-35%. +
+ +
+For this release, the new back end can be disabled by passing
+-ssa=0
to the compiler.
+If you find that your program compiles or runs successfully
+only with the new back end disabled, please
+file a bug report.
+
+The format of exported metadata written by the compiler in package archives has changed: +the old textual format has been replaced by a more compact binary format. +This results in somewhat smaller package archives and fixes a few +long-standing corner case bugs. +
+ +
+For this release, the new export format can be disabled by passing
+-newexport=0
to the compiler.
+If you find that your program compiles or runs successfully
+only with the new export format disabled, please
+file a bug report.
+
+The linker's -X
option no longer supports the unusual two-argument form
+-X
name
value
,
+as announced in the Go 1.6 release
+and in warnings printed by the linker.
+Use -X
name=value
instead.
+
+The compiler and linker have been optimized and run significantly faster in this release than in Go 1.6, +although they are still slower than we would like and will continue to be optimized in future releases. +
+ ++Due to changes across the compiler toolchain and standard library, +binaries built with this release should typically be smaller than binaries +built with Go 1.6, +sometimes by as much as 20-30%. +
+ +
+On x86-64 systems, Go programs now maintain stack frame pointers
+as expected by profiling tools like Linux's perf and Intel's VTune,
+making it easier to analyze and optimize Go programs using these tools.
+The frame pointer maintenance has a small run-time overhead that varies
+but averages around 2%. We hope to reduce this cost in future releases.
+To build a toolchain that does not use frame pointers, set
+GOEXPERIMENT=noframepointer
when running
+make.bash
, make.bat
, or make.rc
.
+
+Packages using cgo may now include +Fortran source files (in addition to C, C++, Objective C, and SWIG), +although the Go bindings must still use C language APIs. +
+ +
+Go bindings may now use a new helper function C.CBytes
.
+In contrast to C.CString
, which takes a Go string
+and returns a *C.char
(a C char*
),
+C.CBytes
takes a Go []byte
+and returns an unsafe.Pointer
(a C void*
).
+
+Packages and binaries built using cgo
have in past releases
+produced different output on each build,
+due to the embedding of temporary directory names.
+When using this release with
+new enough versions of GCC or Clang
+(those that support the -fdebug-prefix-map
option),
+those builds should finally be deterministic.
+
+Due to the alignment of Go's semiannual release schedule with GCC's annual release schedule, +GCC release 6 contains the Go 1.6.1 version of gccgo. +The next release, GCC 7, will likely have the Go 1.8 version of gccgo. +
+ +
+The go
command's basic operation
+is unchanged, but there are a number of changes worth noting.
+
+This release removes support for the GO15VENDOREXPERIMENT
environment variable,
+as announced in the Go 1.6 release.
+Vendoring support
+is now a standard feature of the go
command and toolchain.
+
+The Package
data structure made available to
+“go
list
” now includes a
+StaleReason
field explaining why a particular package
+is or is not considered stale (in need of rebuilding).
+This field is available to the -f
or -json
+options and is useful for understanding why a target is being rebuilt.
+
+The “go
get
” command now supports
+import paths referring to git.openstack.org
.
+
+This release adds experimental, minimal support for building programs using
+binary-only packages,
+packages distributed in binary form
+without the corresponding source code.
+This feature is needed in some commercial settings
+but is not intended to be fully integrated into the rest of the toolchain.
+For example, tools that assume access to complete source code
+will not work with such packages, and there are no plans to support
+such packages in the “go
get
” command.
+
+The “go
doc
” command
+now groups constructors with the type they construct,
+following godoc
.
+
+The “go
vet
” command
+has more accurate analysis in its -copylock
and -printf
checks,
+and a new -tests
check that checks the name and signature of likely test functions.
+To avoid confusion with the new -tests
check, the old, unadvertised
+-test
option has been removed; it was equivalent to -all
-shadow
.
+
+The vet
command also has a new check,
+-lostcancel
, which detects failure to call the
+cancelation function returned by the WithCancel
,
+WithTimeout
, and WithDeadline
functions in
+Go 1.7's new context
package (see below).
+Failure to call the function prevents the new Context
+from being reclaimed until its parent is cancelled.
+(The background context is never cancelled.)
+
+The new subcommand “go
tool
dist
list
”
+prints all supported operating system/architecture pairs.
+
+The “go
tool
trace
” command,
+introduced in Go 1.5,
+has been refined in various ways.
+
+First, collecting traces is significantly more efficient than in past releases. +In this release, the typical execution-time overhead of collecting a trace is about 25%; +in past releases it was at least 400%. +Second, trace files now include file and line number information, +making them more self-contained and making the +original executable optional when running the trace tool. +Third, the trace tool now breaks up large traces to avoid limits +in the browser-based viewer. +
+ ++Although the trace file format has changed in this release, +the Go 1.7 tools can still read traces from earlier releases. +
+ ++As always, the changes are so general and varied that precise statements +about performance are difficult to make. +Most programs should run a bit faster, +due to speedups in the garbage collector and +optimizations in the core library. +On x86-64 systems, many programs will run significantly faster, +due to improvements in generated code brought by the +new compiler back end. +As noted above, in our own benchmarks, +the code generation changes alone typically reduce program CPU time by 5-35%. +
+ +
+
+There have been significant optimizations bringing more than 10% improvements
+to implementations in the
+crypto/sha1
,
+crypto/sha256
,
+encoding/binary
,
+fmt
,
+hash/adler32
,
+hash/crc32
,
+hash/crc64
,
+image/color
,
+math/big
,
+strconv
,
+strings
,
+unicode
,
+and
+unicode/utf16
+packages.
+
+Garbage collection pauses should be significantly shorter than they +were in Go 1.6 for programs with large numbers of idle goroutines, +substantial stack size fluctuation, or large package-level variables. +
+ +
+Go 1.7 moves the golang.org/x/net/context
package
+into the standard library as context
.
+This allows the use of contexts for cancelation, timeouts, and passing
+request-scoped data in other standard library packages,
+including
+net,
+net/http,
+and
+os/exec,
+as noted below.
+
+For more information about contexts, see the +package documentation +and the Go blog post +“Go Concurrent Patterns: Context.” +
+ +
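+For example, a minimal sketch of the now-standard pattern (slowOperation stands in for any context-aware call):
+
+package main
+
+import (
+    "context"
+    "fmt"
+    "time"
+)
+
+func slowOperation(ctx context.Context) error {
+    select {
+    case <-time.After(2 * time.Second):
+        return nil
+    case <-ctx.Done():
+        return ctx.Err() // context.DeadlineExceeded or context.Canceled
+    }
+}
+
+func main() {
+    ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
+    defer cancel() // always release the context's resources
+
+    if err := slowOperation(ctx); err != nil {
+        fmt.Println("operation aborted:", err)
+    }
+}
+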
+Go 1.7 introduces net/http/httptrace
,
+a package that provides mechanisms for tracing events within HTTP requests.
+
+The testing
package now supports the definition
+of tests with subtests and benchmarks with sub-benchmarks.
+This support makes it easy to write table-driven benchmarks
+and to create hierarchical tests.
+It also provides a way to share common setup and tear-down code.
+See the package documentation for details.
+
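+For example, a minimal sketch of a table-driven test using subtests (the package name is a placeholder):
+
+package sample
+
+import (
+    "strings"
+    "testing"
+)
+
+func TestToUpper(t *testing.T) {
+    cases := []struct{ in, want string }{
+        {"a", "A"},
+        {"go", "GO"},
+    }
+    for _, c := range cases {
+        // Each table entry becomes its own subtest; a single case can be
+        // run on its own with: go test -run TestToUpper/go
+        t.Run(c.in, func(t *testing.T) {
+            if got := strings.ToUpper(c.in); got != c.want {
+                t.Errorf("ToUpper(%q) = %q, want %q", c.in, got, c.want)
+            }
+        })
+    }
+}
+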
+All panics started by the runtime now use panic values
+that implement both the
+builtin error
,
+and
+runtime.Error
,
+as
+required by the language specification.
+
+During panics, if a signal's name is known, it will be printed in the stack trace.
+Otherwise, the signal's number will be used, as it was before Go 1.7.
+
+ +
+The new function
+KeepAlive
+provides an explicit mechanism for declaring
+that an allocated object must be considered reachable
+at a particular point in a program,
+typically to delay the execution of an associated finalizer.
+
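+For example, a minimal sketch along the lines of the package documentation, for a Unix-like system (the file path is a placeholder):
+
+package main
+
+import (
+    "runtime"
+    "syscall"
+)
+
+type file struct{ fd int }
+
+func main() {
+    fd, err := syscall.Open("/tmp/example.txt", syscall.O_RDONLY, 0)
+    if err != nil {
+        return
+    }
+    f := &file{fd: fd}
+    runtime.SetFinalizer(f, func(f *file) { syscall.Close(f.fd) })
+
+    var buf [16]byte
+    n, _ := syscall.Read(f.fd, buf[:])
+    // Keep f reachable until Read returns, so its finalizer cannot
+    // close the descriptor while the read is still in progress.
+    runtime.KeepAlive(f)
+    _ = n
+}
+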
+The new function
+CallersFrames
+translates a PC slice obtained from
+Callers
+into a sequence of frames corresponding to the call stack.
+This new API should be preferred instead of direct use of
+FuncForPC
,
+because the frame sequence can more accurately describe
+call stacks with inlined function calls.
+
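+For example, a minimal sketch of the preferred pattern:
+
+package main
+
+import (
+    "fmt"
+    "runtime"
+)
+
+func printStack() {
+    pc := make([]uintptr, 16)
+    n := runtime.Callers(2, pc) // skip runtime.Callers and printStack itself
+    frames := runtime.CallersFrames(pc[:n])
+    for {
+        frame, more := frames.Next()
+        fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
+        if !more {
+            break
+        }
+    }
+}
+
+func main() {
+    printStack()
+}
+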
+The new function
+SetCgoTraceback
+facilitates tighter integration between Go and C code executing
+in the same process called using cgo.
+
+On 32-bit systems, the runtime can now use memory allocated +by the operating system anywhere in the address space, +eliminating the +“memory allocated by OS not in usable range” failure +common in some environments. +
+ ++The runtime can now return unused memory to the operating system on +all architectures. +In Go 1.6 and earlier, the runtime could not +release memory on ARM64, 64-bit PowerPC, or MIPS. +
+ +
+On Windows, Go programs in Go 1.5 and earlier forced
+the global Windows timer resolution to 1ms at startup
+by calling timeBeginPeriod(1)
.
+Changing the global timer resolution caused problems on some systems,
+and testing suggested that the call was not needed for good scheduler performance,
+so Go 1.6 removed the call.
+Go 1.7 brings the call back: under some workloads the call
+is still needed for good scheduler performance.
+
+As always, there are various minor changes and updates to the library, +made with the Go 1 promise of compatibility +in mind. +
+ +
+In previous releases of Go, if
+Reader
's
+Peek
method
+were asked for more bytes than fit in the underlying buffer,
+it would return an empty slice and the error ErrBufferFull
.
+Now it returns the entire underlying buffer, still accompanied by the error ErrBufferFull
.
+
+The new functions
+ContainsAny
and
+ContainsRune
+have been added for symmetry with
+the strings
package.
+
+In previous releases of Go, if
+Reader
's
+Read
method
+were asked for zero bytes with no data remaining, it would
+return a count of 0 and no error.
+Now it returns a count of 0 and the error
+io.EOF
.
+
+The
+Reader
type has a new method
+Reset
to allow reuse of a Reader
.
+
+There are many performance optimizations throughout the package.
+Decompression speed is improved by about 10%,
+while compression for DefaultCompression
is twice as fast.
+
+In addition to those general improvements,
+the
+BestSpeed
+compressor has been replaced entirely and uses an
+algorithm similar to Snappy,
+resulting in about a 2.5X speed increase,
+although the output can be 5-10% larger than with the previous algorithm.
+
+There is also a new compression level
+HuffmanOnly
+that applies Huffman but not Lempel-Ziv encoding.
+Forgoing Lempel-Ziv encoding means that
+HuffmanOnly
runs about 3X faster than the new BestSpeed
+but at the cost of producing compressed outputs that are 20-40% larger than those
+generated by the new BestSpeed
.
+
+It is important to note that both
+BestSpeed
and HuffmanOnly
produce a compressed output that is
+RFC 1951 compliant.
+In other words, any valid DEFLATE decompressor will continue to be able to decompress these outputs.
+
+Lastly, there is a minor change to the decompressor's implementation of
+io.Reader
. In previous versions,
+the decompressor deferred reporting
+io.EOF
until exactly no more bytes could be read.
+Now, it reports
+io.EOF
more eagerly when reading the last set of bytes.
+
+The TLS implementation sends the first few data packets on each connection
+using small record sizes, gradually increasing to the TLS maximum record size.
+This heuristic reduces the amount of data that must be received before
+the first packet can be decrypted, improving communication latency over
+low-bandwidth networks.
+Setting
+Config
's
+DynamicRecordSizingDisabled
field to true
+forces the behavior of Go 1.6 and earlier, where packets are
+as large as possible from the start of the connection.
+
+The TLS client now has optional, limited support for server-initiated renegotiation,
+enabled by setting the
+Config
's
+Renegotiation
field.
+This is needed for connecting to many Microsoft Azure servers.
+
+The errors returned by the package now consistently begin with a
+tls:
prefix.
+In past releases, some errors used a crypto/tls:
prefix,
+some used a tls:
prefix, and some had no prefix at all.
+
+When generating self-signed certificates, the package no longer sets the +“Authority Key Identifier” field by default. +
+
+The new function
+SystemCertPool
+provides access to the entire system certificate pool if available.
+There is also a new associated error type
+SystemRootsError
.
+
+The
+Reader
type's new
+SeekPC
method and the
+Data
type's new
+Ranges
method
+help to find the compilation unit to pass to a
+LineReader
+and to identify the specific function for a given program counter.
+
+The new
+R_390
relocation type
+and its many predefined constants
+support the S390 port.
+
+The ASN.1 decoder now rejects non-minimal integer encodings. +This may cause the package to reject some invalid but formerly accepted ASN.1 data. +
+
+The
+Encoder
's new
+SetIndent
method
+sets the indentation parameters for JSON encoding,
+like in the top-level
+Indent
function.
+
+The
+Encoder
's new
+SetEscapeHTML
method
+controls whether the
+&
, <
, and >
+characters in quoted strings should be escaped as
+\u0026
, \u003c
, and \u003e
,
+respectively.
+As in previous releases, the encoder defaults to applying this escaping,
+to avoid certain problems that can arise when embedding JSON in HTML.
+
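+For example, a minimal sketch using both new methods:
+
+package main
+
+import (
+    "encoding/json"
+    "os"
+)
+
+func main() {
+    enc := json.NewEncoder(os.Stdout)
+    enc.SetIndent("", "  ")  // pretty-print with two-space indentation
+    enc.SetEscapeHTML(false) // keep &, <, and > literal in the output
+    enc.Encode(map[string]string{"query": "a < b && c > d"})
+}
+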
+In earlier versions of Go, this package only supported encoding and decoding
+maps using keys with string types.
+Go 1.7 adds support for maps using keys with integer types:
+the encoding uses a quoted decimal representation as the JSON key.
+Go 1.7 also adds support for encoding maps using non-string keys that implement
+the MarshalText
+(see
+encoding.TextMarshaler
)
+method,
+as well as support for decoding maps using non-string keys that implement
+the UnmarshalText
+(see
+encoding.TextUnmarshaler
)
+method.
+These methods are ignored for keys with string types in order to preserve
+the encoding and decoding used in earlier versions of Go.
+
+When encoding a slice of typed bytes,
+Marshal
+now generates an array of elements encoded using
+that byte type's
+MarshalJSON
+or
+MarshalText
+method if present,
+only falling back to the default base64-encoded string data if neither method is available.
+Earlier versions of Go accept both the original base64-encoded string encoding
+and the array encoding (assuming the byte type also implements
+UnmarshalJSON
+or
+UnmarshalText
+as appropriate),
+so this change should be semantically backwards compatible with earlier versions of Go,
+even though it does change the chosen encoding.
+
+To implement the go command's new support for binary-only packages
+and for Fortran code in cgo-based packages,
+the
+Package
type
+adds new fields BinaryOnly
, CgoFFLAGS
, and FFiles
.
+
+To support the corresponding change in go
test
described above,
+Example
struct adds a Unordered field
+indicating whether the example may generate its output lines in any order.
+
+The package adds new constants
+SeekStart
, SeekCurrent
, and SeekEnd
,
+for use with
+Seeker
+implementations.
+These constants are preferred over os.SEEK_SET
, os.SEEK_CUR
, and os.SEEK_END
,
+but the latter will be preserved for compatibility.
+
+The
+Float
type adds
+GobEncode
and
+GobDecode
methods,
+so that values of type Float
can now be encoded and decoded using the
+encoding/gob
+package.
+
+The
+Read
function and
+Rand
's
+Read
method
+now produce a pseudo-random stream of bytes that is consistent and not
+dependent on the size of the input buffer.
+
+The documentation clarifies that
+Rand's Seed
+and Read
methods
+are not safe to call concurrently, though the global
+functions Seed
+and Read
are (and have
+always been) safe.
+
+The
+Writer
+implementation now emits each multipart section's header sorted by key.
+Previously, iteration over a map caused the section header to use a
+non-deterministic order.
+
+As part of the introduction of context, the
+Dialer
type has a new method
+DialContext
, like
+Dial
but adding the
+context.Context
+for the dial operation.
+The context is intended to obsolete the Dialer
's
+Cancel
and Deadline
fields,
+but the implementation continues to respect them,
+for backwards compatibility.
+
+The
+IP
type's
+String
method has changed its result for invalid IP
addresses.
+In past releases, if an IP
byte slice had length other than 0, 4, or 16, String
+returned "?"
.
+Go 1.7 adds the hexadecimal encoding of the bytes, as in "?12ab"
.
+
+The pure Go name resolution
+implementation now respects nsswitch.conf
's
+stated preference for the priority of DNS lookups compared to
+local file (that is, /etc/hosts
) lookups.
+
+ResponseWriter
's
+documentation now makes clear that beginning to write the response
+may prevent future reads on the request body.
+For maximal compatibility, implementations are encouraged to
+read the request body completely before writing any part of the response.
+
+As part of the introduction of context, the
+Request
has new methods
+Context
, to retrieve the associated context, and
+WithContext
, to construct a copy of Request
+with a modified context.
+
+In the
+Server
implementation,
+Serve
records in the request context
+both the underlying *Server
using the key ServerContextKey
+and the local address on which the request was received (a
+net.Addr
) using the key LocalAddrContextKey
.
+For example, the address on which a request was received is
+req.Context().Value(http.LocalAddrContextKey).(net.Addr)
.
+
+The server's Serve
method
+now only enables HTTP/2 support if the Server.TLSConfig
field is nil
+or includes "h2"
in its TLSConfig.NextProtos
.
+
+The server implementation now
+pads response codes less than 100 to three digits
+as required by the protocol,
+so that w.WriteHeader(5)
uses the HTTP response
+status 005
, not just 5
.
+
+The server implementation now correctly sends only one "Transfer-Encoding" header when "chunked" +is set explicitly, following RFC 7230. +
+ ++The server implementation is now stricter about rejecting requests with invalid HTTP versions. +Invalid requests claiming to be HTTP/0.x are now rejected (HTTP/0.9 was never fully supported), +and plaintext HTTP/2 requests other than the "PRI * HTTP/2.0" upgrade request are now rejected as well. +The server continues to handle encrypted HTTP/2 requests. +
+ ++In the server, a 200 status code is sent back by the timeout handler on an empty +response body, instead of sending back 0 as the status code. +
+ +
+In the client, the
+Transport
implementation passes the request context
+to any dial operation connecting to the remote server.
+If a custom dialer is needed, the new Transport
field
+DialContext
is preferred over the existing Dial
field,
+to allow the transport to supply a context.
+
+The
+Transport
also adds fields
+IdleConnTimeout
,
+MaxIdleConns
,
+and
+MaxResponseHeaderBytes
+to help control client resources consumed
+by idle or chatty servers.
+
+A
+Client
's configured CheckRedirect
function can now
+return ErrUseLastResponse
to indicate that the
+most recent redirect response should be returned as the
+result of the HTTP request.
+That response is now available to the CheckRedirect
function
+as req.Response
.
+
+Since Go 1, the default behavior of the HTTP client is
+to request server-side compression
+using the Accept-Encoding
request header
+and then to decompress the response body transparently,
+and this behavior is adjustable using the
+Transport
's DisableCompression
field.
+In Go 1.7, to aid the implementation of HTTP proxies, the
+Response
's new
+Uncompressed
field reports whether
+this transparent decompression took place.
+
+DetectContentType
+adds support for a few new audio and video content types.
+
+The
+Handler
+adds a new field
+Stderr
+that allows redirection of the child process's
+standard error away from the host process's
+standard error.
+
+The new function
+NewRequest
+prepares a new
+http.Request
+suitable for passing to an
+http.Handler
during a test.
+
+The
+ResponseRecorder
's new
+Result
method
+returns the recorded
+http.Response
.
+Tests that need to check the response's headers or trailers
+should call Result
and inspect the response fields
+instead of accessing
+ResponseRecorder
's HeaderMap
directly.
+
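+For example, a minimal sketch of a handler test using both additions (the handler is illustrative only):
+
+package main
+
+import (
+    "fmt"
+    "io/ioutil"
+    "net/http"
+    "net/http/httptest"
+)
+
+func handler(w http.ResponseWriter, r *http.Request) {
+    w.Header().Set("X-Answer", "42")
+    fmt.Fprintln(w, "hello")
+}
+
+func main() {
+    req := httptest.NewRequest("GET", "http://example.com/", nil)
+    rec := httptest.NewRecorder()
+    handler(rec, req)
+
+    res := rec.Result() // a complete *http.Response, including headers
+    body, _ := ioutil.ReadAll(res.Body)
+    fmt.Println(res.StatusCode, res.Header.Get("X-Answer"), string(body))
+}
+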
+The
+ReverseProxy
implementation now responds with “502 Bad Gateway”
+when it cannot reach a back end; in earlier releases it responded with “500 Internal Server Error.”
+
+Both
+ClientConn
and
+ServerConn
have been documented as deprecated.
+They are low-level, old, and unused by Go's current HTTP stack
+and will no longer be updated.
+Programs should use
+http.Client
,
+http.Transport
,
+and
+http.Server
+instead.
+
+The runtime trace HTTP handler, installed to handle the path /debug/pprof/trace
,
+now accepts a fractional number in its seconds
query parameter,
+allowing collection of traces for intervals smaller than one second.
+This is especially useful on busy servers.
+
+The address parser now allows unescaped UTF-8 text in addresses
+following RFC 6532,
+but it does not apply any normalization to the result.
+For compatibility with older mail parsers,
+the address encoder, namely
+Address
's
+String
method,
+continues to escape all UTF-8 text following RFC 5322.
+
+The ParseAddress
+function and
+the AddressParser.Parse
+method are stricter.
+They used to ignore any characters following an e-mail address, but
+will now return an error for anything other than whitespace.
+
+The
+URL
's
+new ForceQuery
field
+records whether the URL must have a query string,
+in order to distinguish URLs without query strings (like /search
)
+from URLs with empty query strings (like /search?
).
+
+IsExist
now returns true for syscall.ENOTEMPTY
,
+on systems where that error exists.
+
+On Windows,
+Remove
now removes read-only files when possible,
+making the implementation behave as on
+non-Windows systems.
+
+As part of the introduction of context,
+the new constructor
+CommandContext
+is like
+Command
but includes a context that can be used to cancel the command execution.
+
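+For example, a minimal sketch on a Unix-like system (the command and timeout are placeholders):
+
+package main
+
+import (
+    "context"
+    "fmt"
+    "os/exec"
+    "time"
+)
+
+func main() {
+    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
+    defer cancel()
+
+    // If the command is still running when the context expires,
+    // the process is killed and Run returns an error.
+    err := exec.CommandContext(ctx, "sleep", "5").Run()
+    fmt.Println("err:", err)
+}
+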
+The
+Current
+function is now implemented even when cgo is not available.
+
+The new
+Group
type,
+along with the lookup functions
+LookupGroup
and
+LookupGroupId
+and the new field GroupIds
in the User
struct,
+provides access to system-specific user group information.
+
+Although
+Value
's
+Field
method has always been documented to panic
+if the given field number i
is out of range, it has instead
+silently returned a zero
+Value
.
+Go 1.7 changes the method to behave as documented.
+
+The new
+StructOf
+function constructs a struct type at run time.
+It completes the set of type constructors, joining
+ArrayOf
,
+ChanOf
,
+FuncOf
,
+MapOf
,
+PtrTo
,
+and
+SliceOf
.
+
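+For example, a minimal sketch that builds and fills such a struct type at run time:
+
+package main
+
+import (
+    "fmt"
+    "reflect"
+)
+
+func main() {
+    // The run-time equivalent of: struct { Name string `json:"name"`; Age int }
+    typ := reflect.StructOf([]reflect.StructField{
+        {Name: "Name", Type: reflect.TypeOf(""), Tag: `json:"name"`},
+        {Name: "Age", Type: reflect.TypeOf(0)},
+    })
+
+    v := reflect.New(typ).Elem()
+    v.Field(0).SetString("gopher")
+    v.Field(1).SetInt(6)
+    fmt.Printf("%+v\n", v.Interface()) // {Name:gopher Age:6}
+}
+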
+StructTag
's
+new method
+Lookup
+is like
+Get
+but distinguishes the tag not containing the given key
+from the tag associating an empty string with the given key.
+
+The
+Method
and
+NumMethod
+methods of
+Type
and
+Value
+no longer return or count unexported methods.
+
+In previous releases of Go, if
+Reader
's
+Read
method
+were asked for zero bytes with no data remaining, it would
+return a count of 0 and no error.
+Now it returns a count of 0 and the error
+io.EOF
.
+
+The
+Reader
type has a new method
+Reset
to allow reuse of a Reader
.
+
+Duration
's
+time.Duration.String method now reports the zero duration as "0s"
, not "0"
.
+ParseDuration
continues to accept both forms.
+
+The method call time.Local.String()
now returns "Local"
on all systems;
+in earlier releases, it returned an empty string on Windows.
+
+The time zone database in
+$GOROOT/lib/time
has been updated
+to IANA release 2016d.
+This fallback database is only used when the system time zone database
+cannot be found, for example on Windows.
+The Windows time zone abbreviation list has also been updated.
+
+On Linux, the
+SysProcAttr
struct
+(as used in
+os/exec.Cmd
's SysProcAttr
field)
+has a new Unshareflags
field.
+If the field is nonzero, the child process created by
+ForkExec
+(as used in exec.Cmd
's Run
method)
+will call the
+unshare(2)
+system call before executing the new program.
+
+The unicode
package and associated
+support throughout the system has been upgraded from version 8.0 to
+Unicode 9.0.
+
+The latest Go release, version 1.8, arrives six months after Go 1.7. +Most of its changes are in the implementation of the toolchain, runtime, and libraries. +There are two minor changes to the language specification. +As always, the release maintains the Go 1 promise of compatibility. +We expect almost all Go programs to continue to compile and run as before. +
+ ++The release adds support for 32-bit MIPS, +updates the compiler back end to generate more efficient code, +reduces GC pauses by eliminating stop-the-world stack rescanning, +adds HTTP/2 Push support, +adds HTTP graceful shutdown, +adds more context support, +enables profiling mutexes, +and simplifies sorting slices. +
+ ++ When explicitly converting a value from one struct type to another, + as of Go 1.8 the tags are ignored. Thus two structs that differ + only in their tags may be converted from one to the other: +
+ ++func example() { + type T1 struct { + X int `json:"foo"` + } + type T2 struct { + X int `json:"bar"` + } + var v1 T1 + var v2 T2 + v1 = T1(v2) // now legal +} ++ + +
+ The language specification now only requires that implementations
+ support up to 16-bit exponents in floating-point constants. This does not affect
+ either the “gc
” or
+ gccgo
compilers, both of
+ which still support 32-bit exponents.
+
+Go now supports 32-bit MIPS on Linux for both big-endian
+(linux/mips
) and little-endian machines
+(linux/mipsle
) that implement the MIPS32r1 instruction set with FPU
+or kernel FPU emulation. Note that many common MIPS-based routers lack an FPU and
+have firmware that doesn't enable kernel FPU emulation; Go won't run on such machines.
+
+On DragonFly BSD, Go now requires DragonFly 4.4.4 or later. +
+ ++On OpenBSD, Go now requires OpenBSD 5.9 or later. +
+ ++The Plan 9 port's networking support is now much more complete +and matches the behavior of Unix and Windows with respect to deadlines +and cancelation. For Plan 9 kernel requirements, see the +Plan 9 wiki page. +
+ ++ Go 1.8 now only supports OS X 10.8 or later. This is likely the last + Go release to support 10.8. Compiling Go or running + binaries on older OS X versions is untested. +
+ +
+ Go 1.8 will be the last release to support Linux on ARMv5E and ARMv6 processors:
+ Go 1.9 will likely require the ARMv6K (as found in the Raspberry Pi 1) or later.
+ To identify whether a Linux system is ARMv6K or later, run
+ “go
tool
dist
-check-armv6k
”
+ (to facilitate testing, it is also possible to just copy the dist
command to the
+ system without installing a full copy of Go 1.8)
+ and if the program terminates with output "ARMv6K supported." then the system
+ implements ARMv6K or later.
+ Go on non-Linux ARM systems already requires ARMv6K or later.
+
+There are some instabilities on FreeBSD and NetBSD that are known but not understood. +These can lead to program crashes in rare cases. +See +issue 15658 and +issue 16511. +Any help in solving these issues would be appreciated. +
+ +
+For 64-bit x86 systems, the following instructions have been added:
+VBROADCASTSD
,
+BROADCASTSS
,
+MOVDDUP
,
+MOVSHDUP
,
+MOVSLDUP
,
+VMOVDDUP
,
+VMOVSHDUP
, and
+VMOVSLDUP
.
+
+For 64-bit PPC systems, the common vector scalar instructions have been
+added:
+LXS
,
+LXSDX
,
+LXSI
,
+LXSIWAX
,
+LXSIWZX
,
+LXV
,
+LXVD2X
,
+LXVDSX
,
+LXVW4X
,
+MFVSR
,
+MFVSRD
,
+MFVSRWZ
,
+MTVSR
,
+MTVSRD
,
+MTVSRWA
,
+MTVSRWZ
,
+STXS
,
+STXSDX
,
+STXSI
,
+STXSIWX
,
+STXV
,
+STXVD2X
,
+STXVW4X
,
+XSCV
,
+XSCVDPSP
,
+XSCVDPSPN
,
+XSCVDPSXDS
,
+XSCVDPSXWS
,
+XSCVDPUXDS
,
+XSCVDPUXWS
,
+XSCVSPDP
,
+XSCVSPDPN
,
+XSCVSXDDP
,
+XSCVSXDSP
,
+XSCVUXDDP
,
+XSCVUXDSP
,
+XSCVX
,
+XSCVXP
,
+XVCV
,
+XVCVDPSP
,
+XVCVDPSXDS
,
+XVCVDPSXWS
,
+XVCVDPUXDS
,
+XVCVDPUXWS
,
+XVCVSPDP
,
+XVCVSPSXDS
,
+XVCVSPSXWS
,
+XVCVSPUXDS
,
+XVCVSPUXWS
,
+XVCVSXDDP
,
+XVCVSXDSP
,
+XVCVSXWDP
,
+XVCVSXWSP
,
+XVCVUXDDP
,
+XVCVUXDSP
,
+XVCVUXWDP
,
+XVCVUXWSP
,
+XVCVX
,
+XVCVXP
,
+XXLAND
,
+XXLANDC
,
+XXLANDQ
,
+XXLEQV
,
+XXLNAND
,
+XXLNOR
,
+XXLOR
,
+XXLORC
,
+XXLORQ
,
+XXLXOR
,
+XXMRG
,
+XXMRGHW
,
+XXMRGLW
,
+XXPERM
,
+XXPERMDI
,
+XXSEL
,
+XXSI
,
+XXSLDWI
,
+XXSPLT
, and
+XXSPLTW
.
+
+The yacc
tool (previously available by running
+“go
tool
yacc
”) has been removed.
+As of Go 1.7 it was no longer used by the Go compiler.
+It has moved to the “tools” repository and is now available at
+golang.org/x/tools/cmd/goyacc
.
+
+ The fix
tool has a new “context
”
+ fix to change imports from “golang.org/x/net/context
”
+ to “context
”.
+
+ The pprof
tool can now profile TLS servers
+ and skip certificate validation by using the “https+insecure
”
+ URL scheme.
+
+ The callgrind output now has instruction-level granularity. +
+ +
+ The trace
tool has a new -pprof
flag for
+ producing pprof-compatible blocking and latency profiles from an
+ execution trace.
+
+ Garbage collection events are now shown more clearly in the + execution trace viewer. Garbage collection activity is shown on its + own row and GC helper goroutines are annotated with their roles. +
+ +Vet is stricter in some ways and looser where it + previously caused false positives.
+ +Vet now checks for copying an array of locks,
+ duplicate JSON and XML struct field tags,
+ non-space-separated struct tags,
+ deferred calls to HTTP Response.Body.Close
+ before checking errors, and
+ indexed arguments in Printf
.
+ It also improves existing checks.
+Go 1.7 introduced a new compiler back end for 64-bit x86 systems. +In Go 1.8, that back end has been developed further and is now used for +all architectures. +
+ ++The new back end, based on +static single assignment form (SSA), +generates more compact, more efficient code +and provides a better platform for optimizations +such as bounds check elimination. +The new back end reduces the CPU time required by +our benchmark programs by 20-30% +on 32-bit ARM systems. For 64-bit x86 systems, which already used the SSA back end in +Go 1.7, the gains are a more modest 0-10%. Other architectures will likely +see improvements closer to the 32-bit ARM numbers. +
+ +
+ The temporary -ssa=0
compiler flag introduced in Go 1.7
+ to disable the new back end has been removed in Go 1.8.
+
+ In addition to enabling the new compiler back end for all systems, + Go 1.8 also introduces a new compiler front end. The new compiler + front end should not be noticeable to users but is the foundation for + future performance work. +
+ ++ The compiler and linker have been optimized and run faster in this + release than in Go 1.7, although they are still slower than we would + like and will continue to be optimized in future releases. + Compared to the previous release, Go 1.8 is + about 15% faster. +
+ +
+The Go tool now remembers the value of the CGO_ENABLED
environment
+variable set during make.bash
and applies it to all future compilations
+by default to fix issue #12808.
+When doing native compilation, it is rarely necessary to explicitly set
+the CGO_ENABLED
environment variable as make.bash
+will detect the correct setting automatically. The main reason to explicitly
+set the CGO_ENABLED
environment variable is when your environment
+supports cgo, but you explicitly do not want cgo support, in which case, set
+CGO_ENABLED=0
during make.bash
or all.bash
.
+
+The environment variable PKG_CONFIG
may now be used to
+set the program to run to handle #cgo
pkg-config
+directives. The default is pkg-config
, the program
+always used by earlier releases. This is intended to make it easier
+to cross-compile
+cgo code.
+
+The cgo tool now supports a -srcdir
+option, which is used by the go command.
+
+If cgo code calls C.malloc
, and
+malloc
returns NULL
, the program will now
+crash with an out of memory error.
+C.malloc
will never return nil
.
+Unlike most C functions, C.malloc
may not be used in a
+two-result form returning an errno value.
+
+If cgo is used to call a C function passing a +pointer to a C union, and if the C union can contain any pointer +values, and if cgo pointer +checking is enabled (as it is by default), the union value is now +checked for Go pointers. +
+ ++Due to the alignment of Go's semiannual release schedule with GCC's +annual release schedule, +GCC release 6 contains the Go 1.6.1 version of gccgo. +We expect that the next release, GCC 7, will contain the Go 1.8 +version of gccgo. +
+ +
+ The
+ GOPATH
+ environment variable now has a default value if it
+ is unset. It defaults to
+ $HOME/go
on Unix and
+ %USERPROFILE%/go
on Windows.
+
+ The “go
get
” command now always respects
+ HTTP proxy environment variables, regardless of whether
+ the -insecure
flag is used. In previous releases, the
+ -insecure
flag had the side effect of not using proxies.
+
+ The new
+ “go
bug
”
+ command starts a bug report on GitHub, prefilled
+ with information about the current system.
+
+ The
+ “go
doc
”
+ command now groups constants and variables with their type,
+ following the behavior of
+ godoc
.
+
+ In order to improve the readability of doc
's
+ output, each summary of the first-level items is guaranteed to
+ occupy a single line.
+
+ Documentation for a specific method in an interface definition can
+ now be requested, as in
+ “go
doc
net.Conn.SetDeadline
”.
+
+ Go now provides early support for plugins with a “plugin
”
+ build mode for generating plugins written in Go, and a
+ new plugin
package for
+ loading such plugins at run time. Plugin support is currently only
+ available on Linux. Please report any issues.
+
+ The garbage collector no longer considers
+ arguments live throughout the entirety of a function. For more
+ information, and for how to force a variable to remain live, see
+ the runtime.KeepAlive
+ function added in Go 1.7.
+
+ Updating:
+ Code that sets a finalizer on an allocated object may need to add
+ calls to runtime.KeepAlive
in functions or methods
+ using that object.
+ Read the
+ KeepAlive
+ documentation and its example for more details.
+
+In Go 1.6, the runtime +added lightweight, +best-effort detection of concurrent misuse of maps. This release +improves that detector with support for detecting programs that +concurrently write to and iterate over a map. +
++As always, if one goroutine is writing to a map, no other goroutine should be +reading (which includes iterating) or writing the map concurrently. +If the runtime detects this condition, it prints a diagnosis and crashes the program. +The best way to find out more about the problem is to run the program +under the +race detector, +which will more reliably identify the race +and give more detail. +
+ +
+ The runtime.MemStats
+ type has been more thoroughly documented.
+
+As always, the changes are so general and varied that precise statements +about performance are difficult to make. +Most programs should run a bit faster, +due to speedups in the garbage collector and +optimizations in the standard library. +
+ +
+There have been optimizations to implementations in the
+bytes
,
+crypto/aes
,
+crypto/cipher
,
+crypto/elliptic
,
+crypto/sha256
,
+crypto/sha512
,
+encoding/asn1
,
+encoding/csv
,
+encoding/hex
,
+encoding/json
,
+hash/crc32
,
+image/color
,
+image/draw
,
+math
,
+math/big
,
+reflect
,
+regexp
,
+runtime
,
+strconv
,
+strings
,
+syscall
,
+text/template
, and
+unicode/utf8
+packages.
+
+ Garbage collection pauses should be significantly shorter than they + were in Go 1.7, usually under 100 microseconds and often as low as + 10 microseconds. + See the + document on eliminating stop-the-world stack re-scanning + for details. More work remains for Go 1.9. +
+ ++ The overhead of deferred + function calls has been reduced by about half. +
+ +The overhead of calls from Go into C has been reduced by about half.
+ ++Examples have been added to the documentation across many packages. +
+ +
+The sort package
+now includes a convenience function
+Slice
to sort a
+slice given a less function.
+
+In many cases this means that writing a new sorter type is not
+necessary.
+
+Also new are
+SliceStable
and
+SliceIsSorted
.
+
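+For example, a minimal sketch (the data is illustrative only):
+
+package main
+
+import (
+    "fmt"
+    "sort"
+)
+
+func main() {
+    people := []struct {
+        Name string
+        Age  int
+    }{
+        {"Alice", 30},
+        {"Bob", 25},
+        {"Carol", 35},
+    }
+
+    // No Len/Swap/Less methods are needed: just pass a less function.
+    sort.Slice(people, func(i, j int) bool { return people[i].Age < people[j].Age })
+    fmt.Println(people) // [{Bob 25} {Alice 30} {Carol 35}]
+}
+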
+The net/http package now includes a
+mechanism to
+send HTTP/2 server pushes from a
+Handler
.
+Similar to the existing Flusher
and Hijacker
+interfaces, an HTTP/2
+ResponseWriter
+now implements the new
+Pusher
interface.
+
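+For example, a minimal sketch of a handler that pushes a resource when the connection supports it (the paths and certificate files are placeholders):
+
+package main
+
+import (
+    "fmt"
+    "net/http"
+)
+
+func handler(w http.ResponseWriter, r *http.Request) {
+    // On an HTTP/2 connection the ResponseWriter also implements http.Pusher.
+    if pusher, ok := w.(http.Pusher); ok {
+        // Push is best effort; the error is ignored if the client declines.
+        pusher.Push("/static/app.css", nil)
+    }
+    fmt.Fprintln(w, "hello")
+}
+
+func main() {
+    http.HandleFunc("/", handler)
+    // HTTP/2 (and therefore server push) requires TLS.
+    http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil)
+}
+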
+ The HTTP Server now has support for graceful shutdown using the new
+ Server.Shutdown
+ method and abrupt shutdown using the new
+ Server.Close
+ method.
+
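+For example, a minimal sketch of a graceful shutdown triggered by an interrupt signal (the address and timeout are placeholders):
+
+package main
+
+import (
+    "context"
+    "log"
+    "net/http"
+    "os"
+    "os/signal"
+    "time"
+)
+
+func main() {
+    srv := &http.Server{Addr: ":8080"}
+
+    go func() {
+        stop := make(chan os.Signal, 1)
+        signal.Notify(stop, os.Interrupt)
+        <-stop
+
+        // Stop accepting new connections and wait up to five seconds
+        // for in-flight requests to finish.
+        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+        defer cancel()
+        srv.Shutdown(ctx)
+    }()
+
+    if err := srv.ListenAndServe(); err != nil {
+        log.Println("server stopped:", err) // http.ErrServerClosed after Shutdown
+    }
+}
+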
+ Continuing Go 1.7's adoption
+ of context.Context
+ into the standard library, Go 1.8 adds more context support
+ to existing packages:
+
Server.Shutdown
+ takes a context argument.Lookup
methods on the new
+ net.Resolver
now
+ take a context.+ The runtime and tools now support profiling contended mutexes. +
+ +
+ Most users will want to use the new -mutexprofile
+ flag with “go
test
”,
+ and then use pprof on the resultant file.
+
+ Lower-level support is also available via the new
+ MutexProfile
+ and
+ SetMutexProfileFraction
.
+
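+ For illustration, a program that collects the profile itself might
+ enable sampling and then write out the "mutex" profile (the sampling
+ rate of 5 is an arbitrary example):
+
+	// Sample roughly one in five mutex contention events.
+	runtime.SetMutexProfileFraction(5)
+
+	// ... run the contended workload ...
+
+	if p := pprof.Lookup("mutex"); p != nil {
+		p.WriteTo(os.Stdout, 1) // debug=1 prints a human-readable form
+	}
+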
+ A known limitation for Go 1.8 is that the profile only reports contention for
+ sync.Mutex
,
+ not
+ sync.RWMutex
.
+
+As always, there are various minor changes and updates to the library, +made with the Go 1 promise of compatibility +in mind. The following sections list the user visible changes and additions. +Optimizations and minor bug fixes are not listed. +
+ +
+ The tar implementation corrects many bugs in corner cases of the file format.
+ The Reader
+ is now able to process tar files in the PAX format with entries larger than 8GB.
+ The Writer
+ no longer produces invalid tar files in some situations involving long pathnames.
+
+ There have been some minor fixes to the encoder to improve the
+ compression ratio in certain situations. As a result, the exact
+ encoded output of DEFLATE
may be different from Go 1.7. Since
+ DEFLATE
is the underlying compression of gzip, png, zlib, and zip,
+ those formats may have changed outputs.
+
+ The encoder, when operating in
+ NoCompression
+ mode, now produces a consistent output that is not dependent on
+ the size of the slices passed to the
+ Write
+ method.
+
+ The decoder, upon encountering an error, now returns any + buffered data it had uncompressed along with the error. +
+ +
+ The Writer
+ now encodes a zero MTIME
field when
+ the Header.ModTime
+ field is the zero value.
+
+ In previous releases of Go, the Writer
would encode
+ a nonsensical value.
+
+ Similarly,
+ the Reader
+ now reports a zero encoded MTIME
field as a zero
+ Header.ModTime
.
+
+ The DeadlineExceeded
+ error now implements
+ net.Error
+ and reports true for both the Timeout
and
+ Temporary
methods.
+
+ The new method
+ Conn.CloseWrite
+ allows TLS connections to be half closed.
+
+ The new method
+ Config.Clone
+ clones a TLS configuration.
+
+
+ The new Config.GetConfigForClient
+ callback allows selecting a configuration for a client dynamically, based
+ on the client's
+ ClientHelloInfo
.
+
+
+ The ClientHelloInfo
+ struct now has new
+ fields Conn
, SignatureSchemes
(using
+ the new
+ type SignatureScheme
),
+ SupportedProtos
, and SupportedVersions
.
+
+ The new Config.GetClientCertificate
+ callback allows selecting a client certificate based on the server's
+ TLS CertificateRequest
message, represented by the new
+ CertificateRequestInfo
.
+
+ The new
+ Config.KeyLogWriter
+ allows debugging TLS connections
+ in Wireshark and
+ similar tools.
+
+ The new
+ Config.VerifyPeerCertificate
+ callback allows additional validation of a peer's presented certificate.
+
+ The crypto/tls
package now implements basic
+ countermeasures against CBC padding oracles. There should be
+ no explicit secret-dependent timings, but it does not attempt to
+ normalize memory accesses to prevent cache timing leaks.
+
+ The crypto/tls
package now supports
+ X25519 and
+ ChaCha20-Poly1305.
+ ChaCha20-Poly1305 is now prioritized unless
+ hardware support for AES-GCM is present.
+
+ AES-128-CBC cipher suites with SHA-256 are also + now supported, but disabled by default. +
+ ++ PSS signatures are now supported. +
+ +
+ UnknownAuthorityError
+ now has a Cert
field, reporting the untrusted
+ certificate.
+
+ Certificate validation is more permissive in a few cases and + stricter in a few other cases. + +
+ +
+ Root certificates will now also be looked for
+ at /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
+ on Linux, to support RHEL and CentOS.
+
+ The package now supports context.Context
. There are new methods
+ ending in Context
such as
+ DB.QueryContext
and
+ DB.PrepareContext
+ that take context arguments. Using the new Context
methods ensures that
+ connections are closed and returned to the connection pool when the
+ request is done; enables canceling in-progress queries
+ should the driver support that; and allows the database
+ pool to cancel waiting for the next available connection.
+
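+ A minimal sketch of a query that is abandoned if it takes too long (the
+ table, columns, and “?” placeholder style are examples and depend on the
+ driver):
+
+	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
+	defer cancel()
+
+	rows, err := db.QueryContext(ctx, "SELECT id, name FROM users WHERE active = ?", true)
+	if err != nil {
+		return err
+	}
+	defer rows.Close()
+	for rows.Next() {
+		var id int64
+		var name string
+		if err := rows.Scan(&id, &name); err != nil {
+			return err
+		}
+	}
+	return rows.Err()
+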
+ The IsolationLevel
+ can now be set when starting a transaction by setting the isolation level
+ on TxOptions.Isolation
and passing
+ it to DB.BeginTx
.
+ An error will be returned if an isolation level is selected that the driver
+ does not support. A read-only attribute may also be set on the transaction
+ by setting TxOptions.ReadOnly
+ to true.
+
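+ For example, a read-only serializable transaction might be started like
+ this (whether the level is honored depends on the driver):
+
+	tx, err := db.BeginTx(ctx, &sql.TxOptions{
+		Isolation: sql.LevelSerializable,
+		ReadOnly:  true,
+	})
+	if err != nil {
+		return err // includes the case where the driver rejects the level
+	}
+	defer tx.Rollback()
+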
+ Queries now expose the SQL column type information for drivers that support it.
+ Rows can return ColumnTypes
+ which can include SQL type information, column type lengths, and the Go type.
+
+ A Rows
+ can now represent multiple result sets. After
+ Rows.Next
returns false,
+ Rows.NextResultSet
+ may be called to advance to the next result set. The existing Rows
+ should continue to be used after it advances to the next result set.
+
+ NamedArg
may be used
+ as query arguments. The new function Named
+ helps create a NamedArg
+ more succinctly.
+
+ If a driver supports the new
+ Pinger
+ interface, the
+ DB.Ping
+ and
+ DB.PingContext
+ methods will use that interface to check whether a
+ database connection is still valid.
+
+ The new Context
query methods work for all drivers, but
+ Context
cancelation is not responsive unless the driver has been
+ updated to use them. The other features require driver support in
+ database/sql/driver
.
+ Driver authors should review the new interfaces. Users of an existing
+ driver should review the driver's documentation to see what
+ it supports and any system-specific documentation on each feature.
+
+ The package has been extended and is now used by
+ the Go linker to read gcc
-generated object files.
+ The new
+ File.StringTable
+ and
+ Section.Relocs
+ fields provide access to the COFF string table and COFF relocations.
+ The new
+ File.COFFSymbols
+ allows low-level access to the COFF symbol table.
+
+ The new
+ Encoding.Strict
+ method returns an Encoding
that causes the decoder
+ to return an error when the trailing padding bits are not zero.
+
+ UnmarshalTypeError
+ now includes the struct and field name.
+
+ A nil Marshaler
+ now marshals as a JSON null
value.
+
+ A RawMessage
value now
+ marshals the same as its pointer type.
+
+ Marshal
+ encodes floating-point numbers using the same format as in ES6,
+ preferring decimal (not exponential) notation for a wider range of values.
+ In particular, all floating-point integers up to 2⁶⁴ format the
+ same as the equivalent int64
representation.
+
+ In previous versions of Go, unmarshaling a JSON null
into an
+ Unmarshaler
+ was considered a no-op; now the Unmarshaler
's
+ UnmarshalJSON
method is called with the JSON literal
+ null
and can define the semantics of that case.
+
+ Decode
+ is now strict about the format of the ending line.
+
+ Unmarshal
+ now has wildcard support for collecting all attributes using
+ the new ",any,attr"
struct tag.
+
+ The new methods
+ Int.Value
,
+ String.Value
,
+ Float.Value
, and
+ Func.Value
+ report the current value of an exported variable.
+
+ The new
+ function Handler
+ returns the package's HTTP handler, to enable installing it in
+ non-standard locations.
+
+ Scanf
,
+ Fscanf
, and
+ Sscanf
now
+ handle spaces differently and more consistently than
+ previous releases. See the
+ scanning documentation
+ for details.
+
+ The new IsPredeclared
+ function reports whether a string is a predeclared identifier.
+
+ The new function
+ Default
+ returns the default "typed" type for an "untyped" type.
+
+ The alignment of complex64
now matches
+ the Go compiler.
+
+ The package now validates
+ the "type"
attribute on
+ a <script>
tag.
+
+ Decode
+ (and DecodeConfig
)
+ now supports True Color and grayscale transparency.
+
+ Encoder
+ is now faster and creates smaller output
+ when encoding paletted images.
+
+ The new method
+ Int.Sqrt
+ calculates ⌊√x⌋.
+
+ The new method
+ Float.Scan
+ is a support routine for
+ fmt.Scanner
.
+
+ Int.ModInverse
+ now supports negative numbers.
+
+ The new Rand.Uint64
+ method returns uint64
values. The
+ new Source64
+ interface describes sources capable of generating such values
+ directly; otherwise the Rand.Uint64
method
+ constructs a uint64
from two calls
+ to Source
's
+ Int63
method.
+
+ ParseMediaType
+ now preserves unnecessary backslash escapes as literals,
+ in order to support MSIE.
+ When MSIE sends a full file path (in “intranet mode”), it does not
+ escape backslashes: “C:\dev\go\foo.txt
”, not
+ “C:\\dev\\go\\foo.txt
”.
+ If we see an unnecessary backslash escape, we now assume it is from MSIE
+ and intended as a literal backslash.
+ No known MIME generators emit unnecessary backslash escapes
+ for simple token characters like numbers and letters.
+
+ The
+ Reader
's
+ parsing has been relaxed in two ways to accept
+ more input seen in the wild.
+
+
+ First, it accepts an equals sign (=
) not followed
+ by two hex digits as a literal equal sign.
+
+
+ Second, it silently ignores a trailing equals sign at the end of
+ an encoded input.
+
+ The Conn
documentation
+ has been updated to clarify expectations of an interface
+ implementation. Updates in the net/http
packages
+ depend on implementations obeying the documentation.
+
Updating: implementations of the Conn
interface should verify
+ they implement the documented semantics. The
+ golang.org/x/net/nettest
+ package will exercise a Conn
and validate it behaves properly.
+
+ The new method
+ UnixListener.SetUnlinkOnClose
+ sets whether the underlying socket file should be removed from the file system when
+ the listener is closed.
+
+ The new Buffers
type permits
+ writing to the network more efficiently from multiple discontiguous buffers
+ in memory. On certain machines, for certain types of connections,
+ this is optimized into an OS-specific batch write operation (such as writev
).
+
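+ For illustration, a reply assembled from several pre-built slices can be
+ written with a single call (conn is any established net.Conn):
+
+	bufs := net.Buffers{
+		[]byte("HTTP/1.1 200 OK\r\n"),
+		[]byte("Content-Length: 2\r\n\r\n"),
+		[]byte("ok"),
+	}
+	if _, err := bufs.WriteTo(conn); err != nil {
+		log.Println(err)
+	}
+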
+ The new Resolver
looks up names and numbers
+ and supports context.Context
.
+ The Dialer
now has an optional
+ Resolver
field.
+
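+ A minimal sketch of a lookup that is abandoned after one second (the
+ host name is only an example):
+
+	var r net.Resolver // the zero value behaves like the default resolver
+
+	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
+	defer cancel()
+
+	addrs, err := r.LookupHost(ctx, "example.com")
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Println(addrs)
+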
+ Interfaces
is now supported on Solaris.
+
+ The Go DNS resolver now supports resolv.conf
's “rotate
”
+ and “option
ndots:0
” options. The “ndots
” option is
+ now respected in the same way as libresolv
.
+
Server changes:
+Server
+ adds configuration options
+ ReadHeaderTimeout
and IdleTimeout
+ and documents WriteTimeout
.
+ FileServer
+ and
+ ServeContent
+ now support HTTP If-Match
conditional requests,
+ in addition to the previous If-None-Match
+ support for ETags properly formatted according to RFC 7232, section 2.3.
+
+ There are several additions to what a server's Handler
can do:
+
Context
+ returned
+ by Request.Context
+ is canceled if the underlying net.Conn
+ closes. For instance, if the user closes their browser in the
+ middle of a slow request, the Handler
can now
+ detect that the user is gone. This complements the
+ existing CloseNotifier
+ support. This functionality requires that the underlying
+ net.Conn
implements
+ recently clarified interface documentation.
+ TrailerPrefix
+ mechanism.
+ Handler
can now abort a response by panicking
+ with the error
+ ErrAbortHandler
.
+ Write
of zero bytes to a
+ ResponseWriter
+ is now defined as a
+ way to test whether a ResponseWriter
has been hijacked:
+ if so, the Write
returns
+ ErrHijacked
+ without printing an error
+ to the server's error log.
+ Client & Transport changes:
+Client
+ now copies most request headers on redirect. See
+ the documentation
+ on the Client
type for details.
+ Transport
+ now supports international domain names. Consequently, so do
+ Get and other helpers.
+ Client
now supports 301, 307, and 308 redirects.
+
+ For example, Client.Post
now follows 301
+ redirects, converting them to GET
requests
+ without bodies, like it did for 302 and 303 redirect responses
+ previously.
+
+ The Client
now also follows 307 and 308
+ redirects, preserving the original request method and body, if
+ any. If the redirect requires resending the request body, the
+ request must have the new
+ Request.GetBody
+ field defined.
+ NewRequest
+ sets Request.GetBody
automatically for common
+ body types.
+ Transport
now rejects requests for URLs with
+ ports containing non-digit characters.
+ Transport
will now retry non-idempotent
+ requests if no bytes were written before a network failure
+ and the request has no body.
+ Transport.ProxyConnectHeader
+ allows configuration of header values to send to a proxy
+ during a CONNECT
request.
+ DefaultTransport.Dialer
+ now enables DualStack
("Happy Eyeballs") support,
+ allowing the use of IPv4 as a backup if it looks like IPv6 might be
+ failing.
+ Transport
+ no longer reads a byte of a non-nil
+ Request.Body
+ when the
+ Request.ContentLength
+ is zero to determine whether the ContentLength
+ is actually zero or just undefined.
+ To explicitly signal that a body has zero length,
+ either set it to nil
, or set it to the new value
+ NoBody
.
+ The new NoBody
value is intended for use by Request
+ constructor functions; it is used by
+ NewRequest
.
+
+ There is now support for tracing a client request's TLS handshakes with
+ the new
+ ClientTrace.TLSHandshakeStart
+ and
+ ClientTrace.TLSHandshakeDone
.
+
+ The ReverseProxy
+ has a new optional hook,
+ ModifyResponse
,
+ for modifying the response from the back end before proxying it to the client.
+
+ Empty quoted strings are once again allowed in the name part of
+ an address. That is, Go 1.4 and earlier accepted
+ ""
<gopher@example.com>
,
+ but Go 1.5 introduced a bug that rejected this address.
+ The address is recognized again.
+
+ The
+ Header.Date
+ method has always provided a way to parse
+ the Date:
header.
+ A new function
+ ParseDate
+ allows parsing dates found in other
+ header lines, such as the Resent-Date:
header.
+
+ If an implementation of the
+ Auth.Start
+ method returns an empty toServer
value,
+ the package no longer sends
+ trailing whitespace in the SMTP AUTH
command,
+ which some servers rejected.
+
+ The new functions
+ PathEscape
+ and
+ PathUnescape
+ are similar to the query escaping and unescaping functions but
+ for path elements.
+
+ The new methods
+ URL.Hostname
+ and
+ URL.Port
+ return the hostname and port fields of a URL,
+ correctly handling the case where the port may not be present.
+
+ The existing method
+ URL.ResolveReference
+ now properly handles paths with escaped bytes without losing
+ the escaping.
+
+ The URL
type now implements
+ encoding.BinaryMarshaler
and
+ encoding.BinaryUnmarshaler
,
+ making it possible to process URLs in gob data.
+
+ Following RFC 3986,
+ Parse
+ now rejects URLs like this_that:other/thing
instead of
+ interpreting them as relative paths (this_that
is not a valid scheme).
+ To force interpretation as a relative path,
+ such URLs should be prefixed with “./
”.
+ The URL.String
method now inserts this prefix as needed.
+
+ The new function
+ Executable
returns
+ the path name of the running executable.
+
+ An attempt to call a method on
+ an os.File
that has
+ already been closed will now return the new error
+ value os.ErrClosed
.
+ Previously it returned a system-specific error such
+ as syscall.EBADF
.
+
+ On Unix systems, os.Rename
+ will now return an error when used to rename a directory to an
+ existing empty directory.
+ Previously it would fail when renaming to a non-empty directory
+ but succeed when renaming to an empty directory.
+ This makes the behavior on Unix correspond to that of other systems.
+
+ On Windows, long absolute paths are now transparently converted to
+ extended-length paths (paths that start with “\\?\
”).
+ This permits the package to work with files whose path names are
+ longer than 260 characters.
+
+ On Windows, os.IsExist
+ will now return true
for the system
+ error ERROR_DIR_NOT_EMPTY
.
+ This roughly corresponds to the existing handling of the Unix
+ error ENOTEMPTY
.
+
+ On Plan 9, files that are not served by #M
will now
+ have ModeDevice
set in
+ the value returned
+ by FileInfo.Mode
.
+
+ A number of bugs and corner cases on Windows were fixed:
+ Abs
now calls Clean
as documented,
+ Glob
now matches
+ “\\?\c:\*
”,
+ EvalSymlinks
now
+ correctly handles “C:.
”, and
+ Clean
now properly
+ handles a leading “..
” in the path.
+
+ The new function
+ Swapper
was
+ added to support sort.Slice
.
+
+ The Unquote
+ function now strips carriage returns (\r
) in
+ backquoted raw strings, following the
+ Go language semantics.
+
+ The Getpagesize
+ now returns the system's page size, rather than a constant value.
+ Previously it always returned 4KB.
+
+ The signature
+ of Utimes
has
+ changed on Solaris to match all the other Unix systems'
+ signature. Portable code should continue to use
+ os.Chtimes
instead.
+
+ The X__cmsg_data
field has been removed from
+ Cmsghdr
.
+
+ Template.Execute
+ can now take a
+ reflect.Value
as its data
+ argument, and
+ FuncMap
+ functions can also accept and return reflect.Value
.
+
The new function
+ Until
complements
+ the analogous Since
function.
+
+ ParseDuration
+ now accepts long fractional parts.
+
+ Parse
+ now rejects dates before the start of a month, such as June 0;
+ it already rejected dates beyond the end of the month, such as
+ June 31 and July 32.
+
+ The tzdata
database has been updated to version
+ 2016j for systems that don't already have a local time zone
+ database.
+
+
+ The new method
+ T.Name
+ (and B.Name
) returns the name of the current
+ test or benchmark.
+
+ The new function
+ CoverMode
+ reports the test coverage mode.
+
+ Tests and benchmarks are now marked as failed if the race + detector is enabled and a data race occurs during execution. + Previously, individual test cases would appear to pass, + and only the overall execution of the test binary would fail. +
+ +
+ The signature of the
+ MainStart
+ function has changed, as allowed by the documentation. It is an
+ internal detail and not part of the Go 1 compatibility promise.
+ If you're not calling MainStart
directly but see
+ errors, that likely means you set the
+ normally-empty GOROOT
environment variable and it
+ doesn't match the version of your go
command's binary.
+
+ SimpleFold
+ now returns its argument unchanged if the provided input was an invalid rune.
+ Previously, the implementation failed with an index bounds check panic.
+
+ The latest Go release, version 1.9, arrives six months + after Go 1.8 and is the tenth release in + the Go 1.x + series. + There are two changes to the language: + adding support for type aliases and defining when implementations + may fuse floating point operations. + Most of the changes are in the implementation of the toolchain, + runtime, and libraries. + As always, the release maintains the Go 1 + promise of compatibility. + We expect almost all Go programs to continue to compile and run as + before. +
+ ++ The release + adds transparent monotonic time support, + parallelizes compilation of functions within a package, + better supports test helper functions, + includes a new bit manipulation package, + and has a new concurrent map type. +
+ ++ There are two changes to the language. +
++ Go now supports type aliases to support gradual code repair while + moving a type between packages. + The type alias + design document + and an + article on refactoring cover the problem in detail. + In short, a type alias declaration has the form: +
+ ++type T1 = T2 ++ +
+ This declaration introduces an alias name T1
—an
+ alternate spelling—for the type denoted by T2
; that is,
+ both T1
and T2
denote the same type.
+
+ A smaller language change is that the
+ language specification
+ now states when implementations are allowed to fuse floating
+ point operations together, such as by using an architecture's "fused
+ multiply and add" (FMA) instruction to compute x*y
+
z
+ without rounding the intermediate result x*y
.
+ To force the intermediate rounding, write float64(x*y)
+
z
.
+
+ There are no new supported operating systems or processor + architectures in this release. +
+ +
+ Both GOARCH=ppc64
and GOARCH=ppc64le
now
+ require at least POWER8 support. In previous releases,
+ only GOARCH=ppc64le
required POWER8 and the big
+ endian ppc64
architecture supported older
+ hardware.
+
+ +
+ Go 1.9 is the last release that will run on FreeBSD 9.3, + which is already + unsupported by FreeBSD. + Go 1.10 will require FreeBSD 10.3+. +
+ ++ Go 1.9 now enables PT_TLS generation for cgo binaries and thus + requires OpenBSD 6.0 or newer. Go 1.9 no longer supports + OpenBSD 5.9. +
+ +
+ There are some instabilities on FreeBSD that are known but not understood. + These can lead to program crashes in rare cases. + See issue 15658. + Any help in solving this FreeBSD-specific issue would be appreciated. +
+ ++ Go stopped running NetBSD builders during the Go 1.9 development + cycle due to NetBSD kernel crashes, up to and including NetBSD 7.1. + As Go 1.9 is being released, NetBSD 7.1.1 is being released with a fix. + However, at this time we have no NetBSD builders passing our test suite. + Any help investigating the + various NetBSD issues + would be appreciated. +
+ +
+ The Go compiler now supports compiling a package's functions in parallel, taking
+ advantage of multiple cores. This is in addition to the go
command's
+ existing support for parallel compilation of separate packages.
+ Parallel compilation is on by default, but it can be disabled by setting the
+ environment variable GO19CONCURRENTCOMPILATION
to 0
.
+
+ By popular request, ./...
no longer matches packages
+ in vendor
directories in tools accepting package names,
+ such as go
test
. To match vendor
+ directories, write ./vendor/...
.
+
+ The go tool will now use the path from which it
+ was invoked to attempt to locate the root of the Go install tree.
+ This means that if the entire Go installation is moved to a new
+ location, the go tool should continue to work as usual.
+ This may be overridden by setting GOROOT
in the environment,
+ which should only be done in unusual circumstances.
+ Note that this does not affect the result of
+ the runtime.GOROOT function, which
+ will continue to report the original installation location;
+ this may be fixed in later releases.
+
+ Complex division is now C99-compatible. This has always been the + case in gccgo and is now fixed in the gc toolchain. +
+ ++ The linker will now generate DWARF information for cgo executables on Windows. +
+ +
+ The compiler now includes lexical scopes in the generated DWARF if the
+ -N -l
flags are provided, allowing
+ debuggers to hide variables that are not in scope. The .debug_info
+ section is now DWARF version 4.
+
+ The values of GOARM
and GO386
now affect a
+ compiled package's build ID, as used by the go
tool's
+ dependency caching.
+
+ The four-operand ARM MULA
instruction is now assembled correctly,
+ with the addend register as the third argument and the result
+ register as the fourth and final argument.
+ In previous releases, the two meanings were reversed.
+ The three-operand form, in which the fourth argument is implicitly
+ the same as the third, is unaffected.
+ Code using four-operand MULA
instructions
+ will need to be updated, but we believe this form is very rarely used.
+ MULAWT
and MULAWB
were already
+ using the correct order in all forms and are unchanged.
+
+ The assembler now supports ADDSUBPS/PD
, completing the
+ two missing x86 SSE3 instructions.
+
+ Long lists of arguments are now truncated. This improves the readability
+ of go
doc
on some generated code.
+
+ Viewing documentation on struct fields is now supported.
+ For example, go
doc
http.Client.Jar
.
+
+ The new go
env
-json
flag
+ enables JSON output, instead of the default OS-specific output
+ format.
+
+ The go
test
+ command accepts a new -list
flag, which takes a regular
+ expression as an argument and prints to stdout the name of any
+ tests, benchmarks, or examples that match it, without running them.
+
+ Profiles produced by the runtime/pprof
package now
+ include symbol information, so they can be viewed
+ in go
tool
pprof
+ without the binary that produced the profile.
+
+ The go
tool
pprof
command now
+ uses the HTTP proxy information defined in the environment, using
+ http.ProxyFromEnvironment
.
+
+ The vet
command
+ has been better integrated into the
+ go
tool,
+ so go
vet
now supports all standard build
+ flags while vet
's own flags are now available
+ from go
vet
as well as
+ from go
tool
vet
.
+
+Due to the alignment of Go's semiannual release schedule with GCC's +annual release schedule, +GCC release 7 contains the Go 1.8.3 version of gccgo. +We expect that the next release, GCC 8, will contain the Go 1.10 +version of gccgo. +
+ +
+ Users of
+ runtime.Callers
+ should avoid directly inspecting the resulting PC slice and instead use
+ runtime.CallersFrames
+ to get a complete view of the call stack, or
+ runtime.Caller
+ to get information about a single caller.
+ This is because an individual element of the PC slice cannot account
+ for inlined frames or other nuances of the call stack.
+
+ Specifically, code that directly iterates over the PC slice and uses
+ functions such as
+ runtime.FuncForPC
+ to resolve each PC individually will miss inlined frames.
+ To get a complete view of the stack, such code should instead use
+ CallersFrames
.
+ Likewise, code should not assume that the length returned by
+ Callers
is any indication of the call depth.
+ It should instead count the number of frames returned by
+ CallersFrames
.
+
+ Code that queries a single caller at a specific depth should use
+ Caller
rather than passing a slice of length 1 to
+ Callers
.
+
+ runtime.CallersFrames
+ has been available since Go 1.7, so code can be updated prior to
+ upgrading to Go 1.9.
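+
+ A minimal sketch of the recommended pattern:
+
+	pc := make([]uintptr, 32)
+	n := runtime.Callers(1, pc) // 1 skips the runtime.Callers frame itself
+
+	frames := runtime.CallersFrames(pc[:n])
+	for {
+		frame, more := frames.Next()
+		fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
+		if !more {
+			break
+		}
+	}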
+
+ As always, the changes are so general and varied that precise + statements about performance are difficult to make. Most programs + should run a bit faster, due to speedups in the garbage collector, + better generated code, and optimizations in the core library. +
+ +
+ Library functions that used to trigger stop-the-world garbage
+ collection now trigger concurrent garbage collection.
+
+ Specifically, runtime.GC
,
+ debug.SetGCPercent
,
+ and
+ debug.FreeOSMemory
,
+ now trigger concurrent garbage collection, blocking only the calling
+ goroutine until the garbage collection is done.
+
+ The
+ debug.SetGCPercent
+ function only triggers a garbage collection if one is immediately
+ necessary because of the new GOGC value.
+ This makes it possible to adjust GOGC on-the-fly.
+
+ Large object allocation performance is significantly improved in + applications using large (>50GB) heaps containing many large + objects. +
+ +
+ The runtime.ReadMemStats
+ function now takes less than 100µs even for very large heaps.
+
+ The time
package now transparently
+ tracks monotonic time in each Time
+ value, making computing durations between two Time
values
+ a safe operation in the presence of wall clock adjustments.
+ See the package docs and
+ design document
+ for details.
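+
+ For example, an elapsed-time measurement now stays correct even if the
+ wall clock is stepped while it is running:
+
+	start := time.Now()
+	// ... work ...
+	elapsed := time.Since(start) // uses the monotonic reading carried by start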
+
+ Go 1.9 includes a new package,
+ math/bits
, with optimized
+ implementations for manipulating bits. On most architectures,
+ functions in this package are additionally recognized by the
+ compiler and treated as intrinsics for additional performance.
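+
+ A few illustrative calls:
+
+	x := uint64(0x16) // binary 10110
+	fmt.Println(bits.OnesCount64(x))     // 3
+	fmt.Println(bits.TrailingZeros64(x)) // 1
+	fmt.Println(bits.LeadingZeros64(x))  // 59
+	fmt.Println(bits.Len64(x))           // 5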
+
+ The
+ new (*T).Helper
+ and (*B).Helper
+ methods mark the calling function as a test helper function. When
+ printing file and line information, that function will be skipped.
+ This permits writing test helper functions while still having useful
+ line numbers for users.
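+
+ A minimal sketch of such a helper (the assertion itself is only an example):
+
+	func checkStatus(t *testing.T, got, want int) {
+		t.Helper() // failures are reported at the caller's line, not here
+		if got != want {
+			t.Errorf("status = %d, want %d", got, want)
+		}
+	}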
+
+ The new Map
type
+ in the sync
package
+ is a concurrent map with amortized-constant-time loads, stores, and
+ deletes. It is safe for multiple goroutines to call a Map
's methods
+ concurrently.
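+
+ A small sketch of typical use (the keys and values are only examples):
+
+	var m sync.Map
+	m.Store("alice", 1)
+	m.Store("bob", 2)
+	if v, ok := m.Load("alice"); ok {
+		fmt.Println(v)
+	}
+	m.Range(func(key, value interface{}) bool {
+		fmt.Println(key, value)
+		return true // returning false would stop the iteration
+	})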
+
+ The runtime/pprof
package
+ now supports adding labels to pprof
profiler records.
+ Labels form a key-value map that is used to distinguish calls of the
+ same function in different contexts when looking at profiles
+ with the pprof
command.
+ The pprof
package's
+ new Do
function
+ runs code associated with some provided labels. Other new functions
+ in the package help work with labels.
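+
+ For illustration, CPU samples taken while the function below runs carry
+ the given labels (the label names and purgeShard are only examples):
+
+	pprof.Do(ctx, pprof.Labels("worker", "purge", "shard", "7"), func(ctx context.Context) {
+		// Profile samples of code running here are tagged worker=purge, shard=7.
+		purgeShard(ctx)
+	})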
+
+ As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +
+ +
+ The
+ ZIP Writer
+ now sets the UTF-8 bit in
+ the FileHeader.Flags
+ when appropriate.
+
+ On Linux, Go now calls the getrandom
system call
+ without the GRND_NONBLOCK
flag; it will now block
+ until the kernel has sufficient randomness. On kernels predating
+ the getrandom
system call, Go continues to read
+ from /dev/urandom
.
+
+
+ On Unix systems the environment
+ variables SSL_CERT_FILE
+ and SSL_CERT_DIR
can now be used to override the
+ system default locations for the SSL certificate file and SSL
+ certificate files directory, respectively.
+
The FreeBSD file /usr/local/etc/ssl/cert.pem
is
+ now included in the certificate search path.
+
+
+ The package now supports excluded domains in name constraints.
+ In addition to enforcing such constraints,
+ CreateCertificate
+ will create certificates with excluded name constraints
+ if the provided template certificate has the new
+ field
+ ExcludedDNSDomains
+ populated.
+
+
+ If any SAN extension, including with no DNS names, is present
+ in the certificate, then the Common Name from
+ Subject
is ignored.
+ In previous releases, the code tested only whether DNS-name SANs were
+ present in a certificate.
+
+ The package will now use a cached Stmt
if
+ available in Tx.Stmt
.
+ This prevents statements from being re-prepared each time
+ Tx.Stmt
is called.
+
+ The package now allows drivers to implement their own argument checkers by implementing
+ driver.NamedValueChecker
.
+ This also allows drivers to support OUTPUT
and INOUT
parameter types.
+ Out
should be used to return output parameters
+ when supported by the driver.
+
+ Rows.Scan
can now scan user-defined string types.
+ Previously the package supported scanning into numeric types like type
Int
int64
. It now also supports
+ scanning into string types like type
String
string
.
+
+ The new DB.Conn
method returns the new
+ Conn
type representing an
+ exclusive connection to the database from the connection pool. All queries run on
+ a Conn
will use the same underlying
+ connection until Conn.Close
is called
+ to return the connection to the connection pool.
+
+ The new
+ NullBytes
+ and
+ NullRawValue
+ represent the ASN.1 NULL type.
+
+ The new Encoding.WithPadding + method adds support for custom padding characters and disabling padding. +
+ +
+ The new field
+ Reader.ReuseRecord
+ controls whether calls to
+ Read
+ may return a slice sharing the backing array of the previous
+ call's returned slice for improved performance.
+
+ The sharp flag ('#
') is now supported when printing
+ floating point and complex numbers. It will always print a
+ decimal point
+ for %e
, %E
, %f
, %F
, %g
+ and %G
; it will not remove trailing zeros
+ for %g
and %G
.
+
+ The package now includes 128-bit FNV-1 and FNV-1a hash support with
+ New128
and
+ New128a
, respectively.
+
+ The package now reports an error if a predefined escaper (one of + "html", "urlquery" and "js") is found in a pipeline and does not match + what the auto-escaper would have decided on its own. + This avoids certain security or correctness issues. + Now use of one of these escapers is always either a no-op or an error. + (The no-op case eases migration from text/template.) +
+ +
+ The Rectangle.Intersect
+ method now returns a zero Rectangle
when called on
+ adjacent but non-overlapping rectangles, as documented. In
+ earlier releases it would incorrectly return an empty but
+ non-zero Rectangle
.
+
+ The YCbCr to RGBA conversion formula has been tweaked to ensure + that rounding adjustments span the complete [0, 0xffff] RGBA + range. +
+ +
+ The new Encoder.BufferPool
+ field allows specifying an EncoderBufferPool
,
+ that will be used by the encoder to get temporary EncoderBuffer
+ buffers when encoding a PNG image.
+
+ The use of a BufferPool
reduces the number of
+ memory allocations performed while encoding multiple images.
+
+ The package now supports the decoding of transparent 8-bit + grayscale ("Gray8") images. +
+ +
+ The new
+ IsInt64
+ and
+ IsUint64
+ methods report whether an Int
+ may be represented as an int64
or uint64
+ value.
+
+ The new
+ FileHeader.Size
+ field describes the size of a file in a multipart message.
+
+ The new
+ Resolver.StrictErrors
+ provides control over how Go's built-in DNS resolver handles
+ temporary errors during queries composed of multiple sub-queries,
+ such as an A+AAAA address lookup.
+
+ The new
+ Resolver.Dial
+ allows a Resolver
to use a custom dial function.
+
+ JoinHostPort
now only places an address in square brackets if the host contains a colon.
+ In previous releases it would also wrap addresses in square brackets if they contained a percent ('%
') sign.
+
+ The new methods
+ TCPConn.SyscallConn
,
+ IPConn.SyscallConn
,
+ UDPConn.SyscallConn
,
+ and
+ UnixConn.SyscallConn
+ provide access to the connections' underlying file descriptors.
+
+ It is now safe to call Dial
with the address obtained from
+ (*TCPListener).String()
after creating the listener with
+ Listen("tcp", ":0")
.
+ Previously it failed on some machines with half-configured IPv6 stacks.
+
+ The Cookie.String
method, used for
+ Cookie
and Set-Cookie
headers, now encloses values in double quotes
+ if the value contains either a space or a comma.
+
Server changes:
+ServeMux
now ignores ports in the host
+ header when matching handlers. The host is matched unmodified for CONNECT
requests.
+ Server.ServeTLS
method wraps
+ Server.Serve
with added TLS support.
+ Server.WriteTimeout
+ now applies to HTTP/2 connections and is enforced per-stream.
+ StripPrefix
+ now calls its provided handler with a modified clone of the original *http.Request
.
+ Any code storing per-request state in maps keyed by *http.Request
should
+ use
+ Request.Context
,
+ Request.WithContext
,
+ and
+ context.WithValue
instead.
+ LocalAddrContextKey
now contains
+ the connection's actual network address instead of the interface address used by the listener.
+ Client & Transport changes:
+Transport
+ now supports making requests via SOCKS5 proxy when the URL returned by
+ Transport.Proxy
+ has the scheme socks5
.
+
+ The new
+ ProcessEnv
+ function returns FastCGI environment variables associated with an HTTP request
+ for which there are no appropriate
+ http.Request
+ fields, such as REMOTE_USER
.
+
+ The new
+ Server.Client
+ method returns an HTTP client configured for making requests to the test server.
+
+ The new
+ Server.Certificate
+ method returns the test server's TLS certificate, if any.
+
+ The ReverseProxy
+ now proxies all HTTP/2 response trailers, even those not declared in the initial response
+ header. Such undeclared trailers are used by the gRPC protocol.
+
+ The os
package now uses the internal runtime poller
+ for file I/O.
+ This reduces the number of threads required for read/write
+ operations on pipes, and it eliminates races when one goroutine
+ closes a file while another is using the file for I/O.
+
+ On Windows,
+ Args
+ is now populated without shell32.dll
, improving process start-up time by 1-7 ms.
+
+ The os/exec
package now prevents child processes from being created with
+ any duplicate environment variables.
+ If Cmd.Env
+ contains duplicate environment keys, only the last
+ value in the slice for each duplicate key is used.
+
+ Lookup
and
+ LookupId
now
+ work on Unix systems when CGO_ENABLED=0
by reading
+ the /etc/passwd
file.
+
+ LookupGroup
and
+ LookupGroupId
now
+ work on Unix systems when CGO_ENABLED=0
by reading
+ the /etc/group
file.
+
+ The new
+ MakeMapWithSize
+ function creates a map with a capacity hint.
+
+ Tracebacks generated by the runtime and recorded in profiles are
+ now accurate in the presence of inlining.
+ To retrieve tracebacks programmatically, applications should use
+ runtime.CallersFrames
+ rather than directly iterating over the results of
+ runtime.Callers
.
+
+ On Windows, Go no longer forces the system timer to run at high + resolution when the program is idle. + This should reduce the impact of Go programs on battery life. +
+ +
+ On FreeBSD, GOMAXPROCS
and
+ runtime.NumCPU
+ are now based on the process' CPU mask, rather than the total
+ number of CPUs.
+
+ The runtime has preliminary support for Android O. +
+ +
+ Calling
+ SetGCPercent
+ with a negative value no longer runs an immediate garbage collection.
+
+ The execution trace now displays mark assist events, which + indicate when an application goroutine is forced to assist + garbage collection because it is allocating too quickly. +
+ ++ "Sweep" events now encompass the entire process of finding free + space for an allocation, rather than recording each individual + span that is swept. + This reduces allocation latency when tracing allocation-heavy + programs. + The sweep event shows how many bytes were swept and how many + were reclaimed. +
+ +
+ Mutex
is now more fair.
+
+ The new field
+ Credential.NoSetGroups
+ controls whether Unix systems make a setgroups
system call
+ to set supplementary groups when starting a new process.
+
+ The new field
+ SysProcAttr.AmbientCaps
+ allows setting ambient capabilities on Linux 4.3+ when creating
+ a new process.
+
+ On 64-bit x86 Linux, process creation latency has been optimized with
+ use of CLONE_VFORK
and CLONE_VM
.
+
+ The new
+ Conn
+ interface describes some types in the
+ net
+ package that can provide access to their underlying file descriptor
+ using the new
+ RawConn
+ interface.
+
+ The package now chooses values in the full range when
+ generating int64
and uint64
random
+ numbers; in earlier releases generated values were always
+ limited to the [-2⁶², 2⁶²) range.
+
+ In previous releases, using a nil
+ Config.Rand
+ value caused a fixed deterministic random number generator to be used.
+ It now uses a random number generator seeded with the current time.
+ For the old behavior, set Config.Rand
to rand.New(rand.NewSource(0))
.
+
+ The handling of empty blocks, which was broken by a Go 1.8 + change that made the result dependent on the order of templates, + has been fixed, restoring the old Go 1.7 behavior. +
+ +
+ The new methods
+ Duration.Round
+ and
+ Duration.Truncate
+ handle rounding and truncating durations to multiples of a given duration.
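+
+ For example:
+
+	d := 90 * time.Second
+	fmt.Println(d.Round(time.Minute))    // 2m0s (rounds to nearest minute)
+	fmt.Println(d.Truncate(time.Minute)) // 1m0s (rounds toward zero)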
+
+ Retrieving the time and sleeping now work correctly under Wine. +
+ +
+ If a Time
value has a monotonic clock reading, its
+ string representation (as returned by String
) now includes a
+ final field "m=±value"
, where value
is the
+ monotonic clock reading formatted as a decimal number of seconds.
+
+ The included tzdata
timezone database has been
+ updated to version 2017b. As always, it is only used if the
+ system does not already have the database available.
+
+Go version 1, Go 1 for short, defines a language and a set of core libraries +that provide a stable foundation for creating reliable products, projects, and +publications. +
+ ++The driving motivation for Go 1 is stability for its users. People should be able to +write Go programs and expect that they will continue to compile and run without +change, on a time scale of years, including in production environments such as +Google App Engine. Similarly, people should be able to write books about Go, be +able to say which version of Go the book is describing, and have that version +number still be meaningful much later. +
+ ++Code that compiles in Go 1 should, with few exceptions, continue to compile and +run throughout the lifetime of that version, even as we issue updates and bug +fixes such as Go version 1.1, 1.2, and so on. Other than critical fixes, changes +made to the language and library for subsequent releases of Go 1 may +add functionality but will not break existing Go 1 programs. +The Go 1 compatibility document +explains the compatibility guidelines in more detail. +
+ +
+Go 1 is a representation of Go as it is used today, not a wholesale rethinking of
+the language. We avoided designing new features and instead focused on cleaning
+up problems and inconsistencies and improving portability. There are a number of
+changes to the Go language and packages that we had considered for some time and
+prototyped but not released primarily because they are significant and
+backwards-incompatible. Go 1 was an opportunity to get them out, which is
+helpful for the long term, but also means that Go 1 introduces incompatibilities
+for old programs. Fortunately, the go
fix
tool can
+automate much of the work needed to bring programs up to the Go 1 standard.
+
+This document outlines the major changes in Go 1 that will affect programmers +updating existing code; its reference point is the prior release, r60 (tagged as +r60.3). It also explains how to update code from r60 to run under Go 1. +
+ +
+The append
predeclared variadic function makes it easy to grow a slice
+by adding elements to the end.
+A common use is to add bytes to the end of a byte slice when generating output.
+However, append
did not provide a way to append a string to a []byte
,
+which is another common case.
+
+By analogy with the similar property of copy
, Go 1
+permits a string to be appended (byte-wise) directly to a byte
+slice, reducing the friction between strings and byte slices.
+The conversion is no longer necessary:
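+
+	// A small illustrative sketch:
+	greeting := []byte("hello ")
+	greeting = append(greeting, "world"...) // no []byte("world") conversion needed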
+
+Updating: +This is a new feature, so existing code needs no changes. +
+ +
+The close
predeclared function provides a mechanism
+for a sender to signal that no more values will be sent.
+It is important to the implementation of for
range
+loops over channels and is helpful in other situations.
+Partly by design and partly because of race conditions that can occur otherwise,
+it is intended for use only by the goroutine sending on the channel,
+not by the goroutine receiving data.
+However, before Go 1 there was no compile-time checking that close
+was being used correctly.
+
+To close this gap, at least in part, Go 1 disallows close
on receive-only channels.
+Attempting to close such a channel is a compile-time error.
+
+ var c chan int + var csend chan<- int = c + var crecv <-chan int = c + close(c) // legal + close(csend) // legal + close(crecv) // illegal ++ +
+Updating: +Existing code that attempts to close a receive-only channel was +erroneous even before Go 1 and should be fixed. The compiler will +now reject such code. +
+ ++In Go 1, a composite literal of array, slice, or map type can elide the +type specification for the elements' initializers if they are of pointer type. +All four of the initializations in this example are legal; the last one was illegal before Go 1. +
+ +{{code "/doc/progs/go1.go" `/type Date struct/` `/STOP/`}} + +
+Updating:
+This change has no effect on existing code, but the command
+gofmt
-s
applied to existing source
+will, among other things, elide explicit element types wherever permitted.
+
+The old language defined that go
statements executed during initialization created goroutines but that they did not begin to run until initialization of the entire program was complete.
+This introduced clumsiness in many places and, in effect, limited the utility
+of the init
construct:
+if it was possible for another package to use the library during initialization, the library
+was forced to avoid goroutines.
+This design was done for reasons of simplicity and safety but,
+as our confidence in the language grew, it seemed unnecessary.
+Running goroutines during initialization is no more complex or unsafe than running them during normal execution.
+
+In Go 1, code that uses goroutines can be called from
+init
routines and global initialization expressions
+without introducing a deadlock.
+
+Updating:
+This is a new feature, so existing code needs no changes,
+although it's possible that code that depends on goroutines not starting before main
will break.
+There was no such code in the standard repository.
+
+The language spec allows the int
type to be 32 or 64 bits wide, but current implementations set int
to 32 bits even on 64-bit platforms.
+It would be preferable to have int
be 64 bits on 64-bit platforms.
+(There are important consequences for indexing large slices.)
+However, this change would waste space when processing Unicode characters with
+the old language because the int
type was also used to hold Unicode code points: each code point would waste an extra 32 bits of storage if int
grew from 32 bits to 64.
+
+To make changing to 64-bit int
feasible,
+Go 1 introduces a new basic type, rune
, to represent
+individual Unicode code points.
+It is an alias for int32
, analogous to byte
+as an alias for uint8
.
+
+Character literals such as 'a'
, '語'
, and '\u0345'
+now have default type rune
,
+analogous to 1.0
having default type float64
.
+A variable initialized to a character constant will therefore
+have type rune
unless otherwise specified.
+
+Libraries have been updated to use rune
rather than int
+when appropriate. For instance, the functions unicode.ToLower
and
+relatives now take and return a rune
.
+
+Updating:
+Most source code will be unaffected by this because the type inference from
+:=
initializers introduces the new type silently, and it propagates
+from there.
+Some code may get type errors that a trivial conversion will resolve.
+
+Go 1 introduces a new built-in type, error
, which has the following definition:
+
+ type error interface { + Error() string + } ++ +
+Since the consequences of this type are all in the package library, +it is discussed below. +
+ +
+In the old language, to delete the entry with key k
from map m
, one wrote the statement,
+
+ m[k] = value, false ++ +
+This syntax was a peculiar special case, the only two-to-one assignment.
+It required passing a value (usually ignored) that is evaluated but discarded,
+plus a boolean that was nearly always the constant false
.
+It did the job but was odd and a point of contention.
+
+In Go 1, that syntax has gone; instead there is a new built-in
+function, delete
. The call
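+
+	delete(m, k)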
+
+will delete the map entry retrieved by the expression m[k]
.
+There is no return value. Deleting a non-existent entry is a no-op.
+
+Updating:
+Running go
fix
will convert expressions of the form m[k] = value,
+false
into delete(m, k)
when it is clear that
+the ignored value can be safely discarded from the program and
+false
refers to the predefined boolean constant.
+The fix tool
+will flag other uses of the syntax for inspection by the programmer.
+
+The old language specification did not define the order of iteration for maps, +and in practice it differed across hardware platforms. +This caused tests that iterated over maps to be fragile and non-portable, with the +unpleasant property that a test might always pass on one machine but break on another. +
+ +
+In Go 1, the order in which elements are visited when iterating
+over a map using a for
range
statement
+is defined to be unpredictable, even if the same loop is run multiple
+times with the same map.
+Code should not assume that the elements are visited in any particular order.
+
+This change means that code that depends on iteration order is very likely to break early and be fixed long before it becomes a problem. +Just as important, it allows the map implementation to ensure better map balancing even when programs are using range loops to select an element from a map. +
+ +{{code "/doc/progs/go1.go" `/Sunday/` `/^ }/`}} + ++Updating: +This is one change where tools cannot help. Most existing code +will be unaffected, but some programs may break or misbehave; we +recommend manual checking of all range statements over maps to +verify they do not depend on iteration order. There were a few such +examples in the standard repository; they have been fixed. +Note that it was already incorrect to depend on the iteration order, which +was unspecified. This change codifies the unpredictability. +
+ ++The language specification has long guaranteed that in assignments +the right-hand-side expressions are all evaluated before any left-hand-side expressions are assigned. +To guarantee predictable behavior, +Go 1 refines the specification further. +
+ ++If the left-hand side of the assignment +statement contains expressions that require evaluation, such as +function calls or array indexing operations, these will all be done +using the usual left-to-right rule before any variables are assigned +their value. Once everything is evaluated, the actual assignments +proceed in left-to-right order. +
+ ++These examples illustrate the behavior. +
+ +{{code "/doc/progs/go1.go" `/sa :=/` `/then sc.0. = 2/`}} + ++Updating: +This is one change where tools cannot help, but breakage is unlikely. +No code in the standard repository was broken by this change, and code +that depended on the previous unspecified behavior was already incorrect. +
+ +
+A common mistake is to use return
(without arguments) after an assignment to a variable that has the same name as a result variable but is not the same variable.
+This situation is called shadowing: the result variable has been shadowed by another variable with the same name declared in an inner scope.
+
+In functions with named return values, +the Go 1 compilers disallow return statements without arguments if any of the named return values is shadowed at the point of the return statement. +(It isn't part of the specification, because this is one area we are still exploring; +the situation is analogous to the compilers rejecting functions that do not end with an explicit return statement.) +
+ ++This function implicitly returns a shadowed return value and will be rejected by the compiler: +
+ ++ func Bug() (i, j, k int) { + for i = 0; i < 5; i++ { + for j := 0; j < 5; j++ { // Redeclares j. + k += i*j + if k > 100 { + return // Rejected: j is shadowed here. + } + } + } + return // OK: j is not shadowed here. + } ++ +
+Updating: +Code that shadows return values in this way will be rejected by the compiler and will need to be fixed by hand. +The few cases that arose in the standard repository were mostly bugs. +
+ +
+The old language did not allow a package to make a copy of a struct value containing unexported fields belonging to a different package.
+There was, however, a required exception for a method receiver;
+also, the implementations of copy
and append
have never honored the restriction.
+
+Go 1 will allow packages to copy struct values containing unexported fields from other packages.
+Besides resolving the inconsistency,
+this change admits a new kind of API: a package can return an opaque value without resorting to a pointer or interface.
+The new implementations of time.Time
and
+reflect.Value
are examples of types taking advantage of this new property.
+
+As an example, if package p
includes the definitions,
+
+ type Struct struct { + Public int + secret int + } + func NewStruct(a int) Struct { // Note: not a pointer. + return Struct{a, f(a)} + } + func (s Struct) String() string { + return fmt.Sprintf("{%d (secret %d)}", s.Public, s.secret) + } ++ +
+a package that imports p
can assign and copy values of type
+p.Struct
at will.
+Behind the scenes the unexported fields will be assigned and copied just
+as if they were exported,
+but the client code will never be aware of them. The code
+
+ import "p" + + myStruct := p.NewStruct(23) + copyOfMyStruct := myStruct + fmt.Println(myStruct, copyOfMyStruct) ++ +
+will show that the secret field of the struct has been copied to the new value. +
+ ++Updating: +This is a new feature, so existing code needs no changes. +
+ ++Before Go 1, the language did not define equality on struct and array values. +This meant, +among other things, that structs and arrays could not be used as map keys. +On the other hand, Go did define equality on function and map values. +Function equality was problematic in the presence of closures +(when are two closures equal?) +while map equality compared pointers, not the maps' content, which was usually +not what the user would want. +
+ +
+Go 1 addressed these issues.
+First, structs and arrays can be compared for equality and inequality
+(==
and !=
),
+and therefore be used as map keys,
+provided they are composed from elements for which equality is also defined,
+using element-wise comparison.
+
+Second, Go 1 removes the definition of equality for function values,
+except for comparison with nil
.
+Finally, map equality is gone too, also except for comparison with nil
.
+
+Note that equality is still undefined for slices, for which the
+calculation is in general infeasible. Also note that the ordered
+comparison operators (<
<=
+>
>=
) are still undefined for
+structs and arrays.
+
+
+Updating: +Struct and array equality is a new feature, so existing code needs no changes. +Existing code that depends on function or map equality will be +rejected by the compiler and will need to be fixed by hand. +Few programs will be affected, but the fix may require some +redesign. +
+ ++Go 1 addresses many deficiencies in the old standard library and +cleans up a number of packages, making them more internally consistent +and portable. +
+ ++This section describes how the packages have been rearranged in Go 1. +Some have moved, some have been renamed, some have been deleted. +New packages are described in later sections. +
+ +
+Go 1 has a rearranged package hierarchy that groups related items
+into subdirectories. For instance, utf8
and
+utf16
now occupy subdirectories of unicode
.
+Also, some packages have moved into
+subrepositories of
+code.google.com/p/go
+while others have been deleted outright.
+
Old path | +New path | +
---|---|
asn1 | encoding/asn1 |
csv | encoding/csv |
gob | encoding/gob |
json | encoding/json |
xml | encoding/xml |
exp/template/html | html/template |
big | math/big |
cmath | math/cmplx |
rand | math/rand |
http | net/http |
http/cgi | net/http/cgi |
http/fcgi | net/http/fcgi |
http/httptest | net/http/httptest |
http/pprof | net/http/pprof |
net/mail | |
rpc | net/rpc |
rpc/jsonrpc | net/rpc/jsonrpc |
smtp | net/smtp |
url | net/url |
exec | os/exec |
scanner | text/scanner |
tabwriter | text/tabwriter |
template | text/template |
template/parse | text/template/parse |
utf8 | unicode/utf8 |
utf16 | unicode/utf16 |
+Note that the package names for the old cmath
and
+exp/template/html
packages have changed to cmplx
+and template
.
+
+Updating:
+Running go
fix
will update all imports and package renames for packages that
+remain inside the standard repository. Programs that import packages
+that are no longer in the standard repository will need to be edited
+by hand.
+
+Because they are not standardized, the packages under the exp
directory will not be available in the
+standard Go 1 release distributions, although they will be available in source code form
+in the repository for
+developers who wish to use them.
+
+Several packages have moved under exp
at the time of Go 1's release:
+
ebnf
html
†go/types
+(†The EscapeString
and UnescapeString
types remain
+in package html
.)
+
+All these packages are available under the same names, with the prefix exp/
: exp/ebnf
etc.
+
+Also, the utf8.String
type has been moved to its own package, exp/utf8string
.
+
+Finally, the gotype
command now resides in exp/gotype
, while
+ebnflint
is now in exp/ebnflint
.
+If they are installed, they now reside in $GOROOT/bin/tool
.
+
+Updating:
+Code that uses packages in exp
will need to be updated by hand,
+or else compiled from an installation that has exp
available.
+The go
fix
tool or the compiler will complain about such uses.
+
+Because they are deprecated, the packages under the old
directory will not be available in the
+standard Go 1 release distributions, although they will be available in source code form for
+developers who wish to use them.
+
+The packages in their new locations are: +
+ +old/netchan
+Updating:
+Code that uses packages now in old
will need to be updated by hand,
+or else compiled from an installation that has old
available.
+The go
fix
tool will warn about such uses.
+
+Go 1 deletes several packages outright: +
+ +container/vector
exp/datafmt
go/typechecker
old/regexp
old/template
try
+and also the command gotry
.
+
+Updating:
+Code that uses container/vector
should be updated to use
+slices directly. See
+the Go
+Language Community Wiki for some suggestions.
+Code that uses the other packages (there should be almost zero) will need to be rethought.
+
+Go 1 has moved a number of packages into other repositories, usually sub-repositories of +the main Go repository. +This table lists the old and new import paths: + +
Old | +New | +
---|---|
crypto/bcrypt | code.google.com/p/go.crypto/bcrypt |
crypto/blowfish | code.google.com/p/go.crypto/blowfish |
crypto/cast5 | code.google.com/p/go.crypto/cast5 |
crypto/md4 | code.google.com/p/go.crypto/md4 |
crypto/ocsp | code.google.com/p/go.crypto/ocsp |
crypto/openpgp | code.google.com/p/go.crypto/openpgp |
crypto/openpgp/armor | code.google.com/p/go.crypto/openpgp/armor |
crypto/openpgp/elgamal | code.google.com/p/go.crypto/openpgp/elgamal |
crypto/openpgp/errors | code.google.com/p/go.crypto/openpgp/errors |
crypto/openpgp/packet | code.google.com/p/go.crypto/openpgp/packet |
crypto/openpgp/s2k | code.google.com/p/go.crypto/openpgp/s2k |
crypto/ripemd160 | code.google.com/p/go.crypto/ripemd160 |
crypto/twofish | code.google.com/p/go.crypto/twofish |
crypto/xtea | code.google.com/p/go.crypto/xtea |
exp/ssh | code.google.com/p/go.crypto/ssh |
image/bmp | code.google.com/p/go.image/bmp |
image/tiff | code.google.com/p/go.image/tiff |
net/dict | code.google.com/p/go.net/dict |
net/websocket | code.google.com/p/go.net/websocket |
exp/spdy | code.google.com/p/go.net/spdy |
encoding/git85 | code.google.com/p/go.codereview/git85 |
patch | code.google.com/p/go.codereview/patch |
exp/wingui | code.google.com/p/gowingui |
+Updating:
+Running go
fix
will update imports of these packages to use the new import paths.
+Installations that depend on these packages will need to install them using
+a go get
command.
+
+This section describes significant changes to the core libraries, the ones that +affect the most programs. +
+ +
+The placement of os.Error
in package os
is mostly historical: errors first came up when implementing package os
, and they seemed system-related at the time.
+Since then it has become clear that errors are more fundamental than the operating system. For example, it would be nice to use Errors
in packages that os
depends on, like syscall
.
+Also, having Error
in os
introduces many dependencies on os
that would otherwise not exist.
+
+Go 1 solves these problems by introducing a built-in error
interface type and a separate errors
package (analogous to bytes
and strings
) that contains utility functions.
+It replaces os.NewError
with
+errors.New
,
+giving errors a more central place in the environment.
+
+So that the widely-used String method does not cause accidental satisfaction
+of the error interface, the error interface instead uses
+the name Error for that method:
+
+ type error interface { + Error() string + } ++ +
+The fmt
library automatically invokes Error
, as it already
+does for String
, for easy printing of error values.
+
+All standard packages have been updated to use the new interface; the old os.Error
is gone.
+
+A new package, errors
, contains the function
+
+func New(text string) error ++ +
+to turn a string into an error. It replaces the old os.NewError
.
+
+Updating:
+Running go
fix
will update almost all code affected by the change.
+Code that defines error types with a String
method will need to be updated
+by hand to rename the methods to Error
.
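+
+For instance, an error type that previously described itself through a
+String method now provides an Error method instead (the type below is a
+made-up example):
+
+	type TimeoutError struct {
+		Op string
+	}
+
+	// Before Go 1 this method would have been named String.
+	func (e *TimeoutError) Error() string {
+		return fmt.Sprintf("%s timed out", e.Op)
+	}
+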
+
+The old syscall
package, which predated os.Error
+(and just about everything else),
+returned errors as int
values.
+In turn, the os
package forwarded many of these errors, such
+as EINVAL
, but using a different set of errors on each platform.
+This behavior was unpleasant and unportable.
+
+In Go 1, the
+syscall
+package instead returns an error
for system call errors.
+On Unix, the implementation is done by a
+syscall.Errno
type
+that satisfies error
and replaces the old os.Errno
.
+
+The changes affecting os.EINVAL
and relatives are
+described elsewhere.
+
+
+Updating:
+Running go
fix
will update almost all code affected by the change.
+Regardless, most code should use the os
package
+rather than syscall
and so will be unaffected.
+
+Time is always a challenge to support well in a programming language.
+The old Go time
package had int64
units, no
+real type safety,
+and no distinction between absolute times and durations.
+
+One of the most sweeping changes in the Go 1 library is therefore a
+complete redesign of the
+time
package.
+Instead of an integer number of nanoseconds as an int64
,
+and a separate *time.Time
type to deal with human
+units such as hours and years,
+there are now two fundamental types:
+time.Time
+(a value, so the *
is gone), which represents a moment in time;
+and time.Duration
,
+which represents an interval.
+Both have nanosecond resolution.
+A Time
can represent any time into the ancient
+past and remote future, while a Duration
can
+span plus or minus only about 290 years.
+There are methods on these types, plus a number of helpful
+predefined constant durations such as time.Second
.
+
+Among the new methods are things like
+Time.Add
,
+which adds a Duration
to a Time
, and
+Time.Sub
,
+which subtracts two Times
to yield a Duration
.
+
+The most important semantic change is that the Unix epoch (Jan 1, 1970) is now
+relevant only for those functions and methods that mention Unix:
+time.Unix
+and the Unix
+and UnixNano
methods
+of the Time
type.
+In particular,
+time.Now
+returns a time.Time
value rather than, in the old
+API, an integer nanosecond count since the Unix epoch.
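+
+A brief sketch of the new types in use (the durations chosen are arbitrary):
+
+	start := time.Now()                    // a time.Time value
+	deadline := start.Add(5 * time.Minute) // Add takes a time.Duration
+	remaining := deadline.Sub(time.Now())  // Sub yields a time.Duration
+	fmt.Println(remaining > time.Second)
+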
+
+The new types, methods, and constants have been propagated through
+all the standard packages that use time, such as os
and
+its representation of file time stamps.
+
+Updating:
+The go
fix
tool will update many uses of the old time
package to use the new
+types and methods, although it does not replace values such as 1e9
+representing nanoseconds per second.
+Also, because of type changes in some of the values that arise,
+some of the expressions rewritten by the fix tool may require
+further hand editing; in such cases the rewrite will include
+the correct function or method for the old functionality, but
+may have the wrong type or require further analysis.
+
+This section describes smaller changes, such as those to less commonly
+used packages or that affect
+few programs beyond the need to run go
fix
.
+This category includes packages that are new in Go 1.
+Collectively they improve portability, regularize behavior, and
+make the interfaces more modern and Go-like.
+
+In Go 1, *zip.Writer
no
+longer has a Write
method. Its presence was a mistake.
+
+Updating: +What little code is affected will be caught by the compiler and must be updated by hand. +
+ +
+In Go 1, the bufio.NewReaderSize
+and
+bufio.NewWriterSize
+functions no longer return an error for invalid sizes.
+If the argument size is too small or invalid, it is adjusted.
+
+Updating:
+Running go
fix
will update calls that assign the error to _.
+Calls that aren't fixed will be caught by the compiler and must be updated by hand.
+
+In Go 1, the NewWriterXxx
functions in
+compress/flate
,
+compress/gzip
and
+compress/zlib
+all return (*Writer, error)
if they take a compression level,
+and *Writer
otherwise. Package gzip
's
+Compressor
and Decompressor
types have been renamed
+to Writer
and Reader
. Package flate
's
+WrongValueError
type has been removed.
+
+Updating:
+Running go
fix
will update old names and calls that assign the error to _.
+Calls that aren't fixed will be caught by the compiler and must be updated by hand.
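+
+For reference, a sketch of the new signatures in use; only the leveled
+constructor returns an error:
+
+	var buf bytes.Buffer
+	zw := gzip.NewWriter(&buf) // no error result
+	zw.Close()
+	lw, err := gzip.NewWriterLevel(&buf, gzip.BestCompression) // (*Writer, error)
+	if err != nil {
+		log.Fatal(err)
+	}
+	lw.Close()
+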
+
+In Go 1, the Reset
method has been removed. Go does not guarantee
+that memory is not copied and therefore this method was misleading.
+
+The cipher-specific types *aes.Cipher
, *des.Cipher
,
+and *des.TripleDESCipher
have been removed in favor of
+cipher.Block
.
+
+Updating: +Remove the calls to Reset. Replace uses of the specific cipher types with +cipher.Block. +
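+
+As an illustration (the key below is a placeholder), code that previously
+held a *aes.Cipher now works with the cipher.Block interface:
+
+	key := make([]byte, 16)          // placeholder 128-bit key
+	block, err := aes.NewCipher(key) // returns a cipher.Block
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Println(block.BlockSize())
+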
+ +
+In Go 1, elliptic.Curve
+has been made an interface to permit alternative implementations. The curve
+parameters have been moved to the
+elliptic.CurveParams
+structure.
+
+Updating:
+Existing users of *elliptic.Curve
will need to change to
+simply elliptic.Curve
. Calls to Marshal
,
+Unmarshal
and GenerateKey
are now functions
+in crypto/elliptic
that take an elliptic.Curve
+as their first argument.
+
+In Go 1, the hash-specific functions, such as hmac.NewMD5
, have
+been removed from crypto/hmac
. Instead, hmac.New
takes
+a function that returns a hash.Hash
, such as md5.New
.
+
+Updating:
+Running go
fix
will perform the needed changes.
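+
+For example (the key and message are placeholders), code that called
+hmac.NewMD5(key) now passes the hash constructor explicitly:
+
+	mac := hmac.New(md5.New, []byte("placeholder key")) // was hmac.NewMD5(key)
+	mac.Write([]byte("message"))
+	fmt.Printf("%x\n", mac.Sum(nil))
+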
+
+In Go 1, the
+CreateCertificate
+function and
+CreateCRL
+method in crypto/x509
have been altered to take an
+interface{}
where they previously took a *rsa.PublicKey
+or *rsa.PrivateKey
. This will allow other public key algorithms
+to be implemented in the future.
+
+Updating: +No changes will be needed. +
+ +
+In Go 1, the binary.TotalSize
function has been replaced by
+Size
,
+which takes an interface{}
argument rather than
+a reflect.Value
.
+
+Updating: +What little code is affected will be caught by the compiler and must be updated by hand. +
+ +
+In Go 1, the xml
package
+has been brought closer in design to the other marshaling packages such
+as encoding/gob
.
+
+The old Parser
type is renamed
+Decoder
and has a new
+Decode
method. An
+Encoder
type was also introduced.
+
+The functions Marshal
+and Unmarshal
+work with []byte
values now. To work with streams,
+use the new Encoder
+and Decoder
types.
+
+When marshaling or unmarshaling values, the format of supported flags in
+field tags has changed to be closer to the
+json
package
+(`xml:"name,flag"`
). The matching done between field tags, field
+names, and the XML attribute and element names is now case-sensitive.
+The XMLName
field tag, if present, must also match the name
+of the XML element being marshaled.
+
+Updating:
+Running go
fix
will update most uses of the package except for some calls to
+Unmarshal
. Special care must be taken with field tags,
+since the fix tool will not update them and if not fixed by hand they will
+misbehave silently in some cases. For example, the old
+"attr"
is now written ",attr"
while plain
+"attr"
remains valid but with a different meaning.
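+
+A small sketch of the new style; the Person type and its fields are invented
+for illustration:
+
+	type Person struct {
+		XMLName xml.Name `xml:"person"`
+		Name    string   `xml:"name,attr"` // an attribute; plain "attr" expressed this before Go 1
+		Email   string   `xml:"email"`
+	}
+
+	data, err := xml.Marshal(&Person{Name: "gopher", Email: "gopher@example.com"})
+	if err != nil {
+		log.Fatal(err)
+	}
+	var p Person
+	if err := xml.NewDecoder(bytes.NewBuffer(data)).Decode(&p); err != nil {
+		log.Fatal(err)
+	}
+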
+
+In Go 1, the RemoveAll
function has been removed.
+The Iter
function and Iter method on *Map
have
+been replaced by
+Do
+and
+(*Map).Do
.
+
+Updating:
+Most code using expvar
will not need changing. The rare code that used
+Iter
can be updated to pass a closure to Do
to achieve the same effect.
+
+In Go 1, the interface flag.Value
has changed slightly.
+The Set
method now returns an error
instead of
+a bool
to indicate success or failure.
+
+There is also a new kind of flag, Duration
, to support argument
+values specifying time intervals.
+Values for such flags must be given units, just as time.Duration
+formats them: 10s
, 1h30m
, etc.
+
+Updating:
+Programs that implement their own flags will need minor manual fixes to update their
+Set
methods.
+The Duration
flag is new and affects no existing code.
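+
+For illustration, a Duration flag and a custom flag.Value whose Set method
+returns an error (the names are invented):
+
+	var timeout = flag.Duration("timeout", 30*time.Second, "request timeout (e.g. 10s, 1h30m)")
+
+	type levelFlag int
+
+	func (l *levelFlag) String() string { return strconv.Itoa(int(*l)) }
+
+	// Set returns an error in Go 1; it previously returned a bool.
+	func (l *levelFlag) Set(s string) error {
+		v, err := strconv.Atoi(s)
+		if err != nil {
+			return err
+		}
+		*l = levelFlag(v)
+		return nil
+	}
+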
+
+Several packages under go
have slightly revised APIs.
+
+A concrete Mode
type was introduced for configuration mode flags
+in the packages
+go/scanner
,
+go/parser
,
+go/printer
, and
+go/doc
.
+
+The modes AllowIllegalChars
and InsertSemis
have been removed
+from the go/scanner
package. They were mostly
+useful for scanning text other than Go source files. Instead, the
+text/scanner
package should be used
+for that purpose.
+
+The ErrorHandler
provided
+to the scanner's Init
method is
+now simply a function rather than an interface. The ErrorVector
type has
+been removed in favor of the (existing) ErrorList
+type, and the ErrorVector
methods have been migrated. Instead of embedding
+an ErrorVector
in a client of the scanner, now a client should maintain
+an ErrorList
.
+
+The set of parse functions provided by the go/parser
+package has been reduced to the primary parse function
+ParseFile
, and a couple of
+convenience functions ParseDir
+and ParseExpr
.
+
+The go/printer
package supports an additional
+configuration mode SourcePos
;
+if set, the printer will emit //line
comments such that the generated
+output contains the original source code position information. The new type
+CommentedNode
can be
+used to provide comments associated with an arbitrary
+ast.Node
(until now only
+ast.File
carried comment information).
+
+The type names of the go/doc
package have been
+streamlined by removing the Doc
suffix: PackageDoc
+is now Package
, ValueDoc
is Value
, etc.
+Also, all types now consistently have a Name
field (or Names
,
+in the case of type Value
) and Type.Factories
has become
+Type.Funcs
.
+Instead of calling doc.NewPackageDoc(pkg, importpath)
,
+documentation for a package is created with:
+
+ doc.New(pkg, importpath, mode) ++ +
+where the new mode
parameter specifies the operation mode:
+if set to AllDecls
, all declarations
+(not just exported ones) are considered.
+The function NewFileDoc
was removed, and the function
+CommentText
has become the method
+Text
of
+ast.CommentGroup
.
+
+In package go/token
, the
+token.FileSet
method Files
+(which originally returned a channel of *token.File
s) has been replaced
+with the iterator Iterate
that
+accepts a function argument instead.
+
+In package go/build
, the API
+has been nearly completely replaced.
+The package still computes Go package information
+but it does not run the build: the Cmd
and Script
+types are gone.
+(To build code, use the new
+go
command instead.)
+The DirInfo
type is now named
+Package
.
+FindTree
and ScanDir
are replaced by
+Import
+and
+ImportDir
.
+
+Updating:
+Code that uses packages in go
will have to be updated by hand; the
+compiler will reject incorrect uses. Templates used in conjunction with any of the
+go/doc
types may need manual fixes; the renamed fields will lead
+to run-time errors.
+
+In Go 1, the definition of hash.Hash
includes
+a new method, BlockSize
. This new method is used primarily in the
+cryptographic libraries.
+
+The Sum
method of the
+hash.Hash
interface now takes a
+[]byte
argument, to which the hash value will be appended.
+The previous behavior can be recreated by adding a nil
argument to the call.
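+
+For example, using the md5 package purely for illustration:
+
+	h := md5.New()
+	io.WriteString(h, "some data")
+	sum := h.Sum(nil) // appends to nil; was h.Sum() before Go 1
+	fmt.Printf("%x\n", sum)
+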
+
+Updating:
+Existing implementations of hash.Hash
will need to add a
+BlockSize
method. Hashes that process the input one byte at
+a time can implement BlockSize
to return 1.
+Running go
fix
will update calls to the Sum
methods of the various
+implementations of hash.Hash
.
+
+Updating: +Since the package's functionality is new, no updating is necessary. +
+ +
+In Go 1 the http
package is refactored,
+putting some of the utilities into a
+httputil
subdirectory.
+These pieces are only rarely needed by HTTP clients.
+The affected items are:
+
+The Request.RawURL
field has been removed; it was a
+historical artifact.
+
+The Handle
and HandleFunc
+functions, and the similarly-named methods of ServeMux
,
+now panic if an attempt is made to register the same pattern twice.
+
+Updating:
+Running go
fix
will update the few programs that are affected except for
+uses of RawURL
, which must be fixed by hand.
+
+The image
package has had a number of
+minor changes, rearrangements and renamings.
+
+Most of the color handling code has been moved into its own package,
+image/color
.
+For the elements that moved, a symmetry arises; for instance,
+each pixel of an
+image.RGBA
+is a
+color.RGBA
.
+
+The old image/ycbcr
package has been folded, with some
+renamings, into the
+image
+and
+image/color
+packages.
+
+The old image.ColorImage
type is still in the image
+package but has been renamed
+image.Uniform
,
+while image.Tiled
has been removed.
+
+This table lists the renamings. +
+ +Old | +New | +
---|---|
image.Color | color.Color |
image.ColorModel | color.Model |
image.ColorModelFunc | color.ModelFunc |
image.PalettedColorModel | color.Palette |
image.RGBAColor | color.RGBA |
image.RGBA64Color | color.RGBA64 |
image.NRGBAColor | color.NRGBA |
image.NRGBA64Color | color.NRGBA64 |
image.AlphaColor | color.Alpha |
image.Alpha16Color | color.Alpha16 |
image.GrayColor | color.Gray |
image.Gray16Color | color.Gray16 |
image.RGBAColorModel | color.RGBAModel |
image.RGBA64ColorModel | color.RGBA64Model |
image.NRGBAColorModel | color.NRGBAModel |
image.NRGBA64ColorModel | color.NRGBA64Model |
image.AlphaColorModel | color.AlphaModel |
image.Alpha16ColorModel | color.Alpha16Model |
image.GrayColorModel | color.GrayModel |
image.Gray16ColorModel | color.Gray16Model |
ycbcr.RGBToYCbCr | color.RGBToYCbCr |
ycbcr.YCbCrToRGB | color.YCbCrToRGB |
ycbcr.YCbCrColorModel | color.YCbCrModel |
ycbcr.YCbCrColor | color.YCbCr |
ycbcr.YCbCr | image.YCbCr |
ycbcr.SubsampleRatio444 | image.YCbCrSubsampleRatio444 |
ycbcr.SubsampleRatio422 | image.YCbCrSubsampleRatio422 |
ycbcr.SubsampleRatio420 | image.YCbCrSubsampleRatio420 |
image.ColorImage | image.Uniform |
+The image package's New
functions
+(NewRGBA
,
+NewRGBA64
, etc.)
+take an image.Rectangle
as an argument
+instead of four integers.
+
+Finally, there are new predefined color.Color
variables
+color.Black
,
+color.White
,
+color.Opaque
+and
+color.Transparent
.
+
+Updating:
+Running go
fix
will update almost all code affected by the change.
+
+In Go 1, the syslog.NewLogger
+function returns an error as well as a log.Logger
.
+
+Updating: +What little code is affected will be caught by the compiler and must be updated by hand. +
+ +
+In Go 1, the FormatMediaType
function
+of the mime
package has been simplified to make it
+consistent with
+ParseMediaType
.
+It now takes "text/html"
rather than "text"
and "html"
.
+
+Updating: +What little code is affected will be caught by the compiler and must be updated by hand. +
+ +
+In Go 1, the various SetTimeout
,
+SetReadTimeout
, and SetWriteTimeout
methods
+have been replaced with
+SetDeadline
,
+SetReadDeadline
, and
+SetWriteDeadline
,
+respectively. Rather than taking a timeout value in nanoseconds that
+applies to any activity on the connection, the new methods set an
+absolute deadline (as a time.Time
value) after which
+reads and writes will time out and no longer block.
+
+There are also new functions
+net.DialTimeout
+to simplify timing out dialing a network address and
+net.ListenMulticastUDP
+to allow multicast UDP to listen concurrently across multiple listeners.
+The net.ListenMulticastUDP
function replaces the old
+JoinGroup
and LeaveGroup
methods.
+
+Updating: +Code that uses the old methods will fail to compile and must be updated by hand. +The semantic change makes it difficult for the fix tool to update automatically. +
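+
+A sketch of the new calls (the address and durations are placeholders):
+
+	conn, err := net.DialTimeout("tcp", "example.com:80", 3*time.Second)
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer conn.Close()
+	// Deadlines are absolute moments, not timeout durations.
+	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
+	conn.SetWriteDeadline(time.Now().Add(5 * time.Second))
+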
+ +
+The Time
function has been removed; callers should use
+the Time
type from the
+time
package.
+
+The Exec
function has been removed; callers should use
+Exec
from the syscall
package, where available.
+
+The ShellExpand
function has been renamed to ExpandEnv
.
+
+The NewFile
function
+now takes a uintptr
fd, instead of an int
.
+The Fd
method on files now
+also returns a uintptr
.
+
+There are no longer error constants such as EINVAL
+in the os
package, since the set of values varied with
+the underlying operating system. There are new portable functions like
+IsPermission
+to test common error properties, plus a few new error values
+with more Go-like names, such as
+ErrPermission
+and
+ErrNotExist
.
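+
+For example, instead of comparing an error against os.ENOENT, code now asks
+about the property it cares about (the file name is a placeholder):
+
+	_, err := os.Open("config.json")
+	if os.IsNotExist(err) {
+		log.Print("config.json does not exist")
+	} else if os.IsPermission(err) {
+		log.Print("permission denied reading config.json")
+	}
+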
+
+The Getenverror
function has been removed. To distinguish
+between a non-existent environment variable and an empty string,
+use os.Environ
or
+syscall.Getenv
.
+
+The Process.Wait
method has
+dropped its option argument and the associated constants are gone
+from the package.
+Also, the function Wait
is gone; only the method of
+the Process
type persists.
+
+The Waitmsg
type returned by
+Process.Wait
+has been replaced with a more portable
+ProcessState
+type with accessor methods to recover information about the
+process.
+Because of changes to Wait
, the ProcessState
+value always describes an exited process.
+Portability concerns simplified the interface in other ways, but the values returned by the
+ProcessState.Sys
and
+ProcessState.SysUsage
+methods can be type-asserted to underlying system-specific data structures such as
+syscall.WaitStatus
and
+syscall.Rusage
on Unix.
+
+Updating:
+Running go
fix
will drop a zero argument to Process.Wait
.
+All other changes will be caught by the compiler and must be updated by hand.
+
+Go 1 redefines the os.FileInfo
type,
+changing it from a struct to an interface:
+
+ type FileInfo interface { + Name() string // base name of the file + Size() int64 // length in bytes + Mode() FileMode // file mode bits + ModTime() time.Time // modification time + IsDir() bool // abbreviation for Mode().IsDir() + Sys() interface{} // underlying data source (can return nil) + } ++ +
+The file mode information has been moved into a subtype called
+os.FileMode
,
+a simple integer type with IsDir
, Perm
, and String
+methods.
+
+The system-specific details of file modes and properties such as (on Unix)
+i-number have been removed from FileInfo
altogether.
+Instead, each operating system's os
package provides an
+implementation of the FileInfo
interface, which
+has a Sys
method that returns the
+system-specific representation of file metadata.
+For instance, to discover the i-number of a file on a Unix system, unpack
+the FileInfo
like this:
+
+ fi, err := os.Stat("hello.go") + if err != nil { + log.Fatal(err) + } + // Check that it's a Unix file. + unixStat, ok := fi.Sys().(*syscall.Stat_t) + if !ok { + log.Fatal("hello.go: not a Unix file") + } + fmt.Printf("file i-number: %d\n", unixStat.Ino) ++ +
+Assuming (which is unwise) that "hello.go"
is a Unix file,
+the i-number expression could be contracted to
+
+ fi.Sys().(*syscall.Stat_t).Ino ++ +
+The vast majority of uses of FileInfo
need only the methods
+of the standard interface.
+
+The os
package no longer contains wrappers for the POSIX errors
+such as ENOENT
.
+For the few programs that need to verify particular error conditions, there are
+now the boolean functions
+IsExist
,
+IsNotExist
+and
+IsPermission
.
+
+Updating:
+Running go
fix
will update code that uses the old equivalent of the current os.FileInfo
+and os.FileMode
API.
+Code that needs system-specific file details will need to be updated by hand.
+Code that uses the old POSIX error values from the os
package
+will fail to compile and will also need to be updated by hand.
+
+The os/signal
package in Go 1 replaces the
+Incoming
function, which returned a channel
+that received all incoming signals,
+with the selective Notify
function, which asks
+for delivery of specific signals on an existing channel.
+
+Updating: +Code must be updated by hand. +A literal translation of +
++c := signal.Incoming() ++
+is +
++c := make(chan os.Signal, 1) +signal.Notify(c) // ask for all signals ++
+but most code should list the specific signals it wants to handle instead: +
++c := make(chan os.Signal, 1) +signal.Notify(c, syscall.SIGHUP, syscall.SIGQUIT) ++ +
+In Go 1, the Walk
function of the
+path/filepath
package
+has been changed to take a function value of type
+WalkFunc
+instead of a Visitor
interface value.
+WalkFunc
unifies the handling of both files and directories.
+
+ type WalkFunc func(path string, info os.FileInfo, err error) error ++ +
+The WalkFunc
function will be called even for files or directories that could not be opened;
+in such cases the error argument will describe the failure.
+If a directory's contents are to be skipped,
+the function should return the value filepath.SkipDir
+
+Updating: +The change simplifies most code but has subtle consequences, so affected programs +will need to be updated by hand. +The compiler will catch code using the old interface. +
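+
+A minimal walk using the new function type might look like this; the root
+directory and the skipped directory name are arbitrary:
+
+	err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
+		if err != nil {
+			return err // the path could not be read
+		}
+		if info.IsDir() && info.Name() == "testdata" {
+			return filepath.SkipDir // skip this directory's contents
+		}
+		fmt.Println(path)
+		return nil
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+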
+ +
+The regexp
package has been rewritten.
+It has the same interface but the specification of the regular expressions
+it supports has changed from the old "egrep" form to that of
+RE2.
+
+Updating: +Code that uses the package should have its regular expressions checked by hand. +
+ +
+In Go 1, much of the API exported by package
+runtime
has been removed in favor of
+functionality provided by other packages.
+Code using the runtime.Type
interface
+or its specific concrete type implementations should
+now use package reflect
.
+Code using runtime.Semacquire
or runtime.Semrelease
+should use channels or the abstractions in package sync
.
+The runtime.Alloc
, runtime.Free
,
+and runtime.Lookup
functions, an unsafe API created for
+debugging the memory allocator, have no replacement.
+
+Before, runtime.MemStats
was a global variable holding
+statistics about memory allocation, and calls to runtime.UpdateMemStats
+ensured that it was up to date.
+In Go 1, runtime.MemStats
is a struct type, and code should use
+runtime.ReadMemStats
+to obtain the current statistics.
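+
+For instance, to read the current allocation statistics:
+
+	var ms runtime.MemStats
+	runtime.ReadMemStats(&ms)
+	fmt.Println("bytes allocated:", ms.Alloc)
+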
+
+The package adds a new function,
+runtime.NumCPU
, that returns the number of CPUs available
+for parallel execution, as reported by the operating system kernel.
+Its value can inform the setting of GOMAXPROCS
.
+The runtime.Cgocalls
and runtime.Goroutines
functions
+have been renamed to runtime.NumCgoCall
and runtime.NumGoroutine
.
+
+Updating:
+Running go
fix
will update code for the function renamings.
+Other code will need to be updated by hand.
+
+In Go 1, the
+strconv
+package has been significantly reworked to make it more Go-like and less C-like,
+although Atoi lives on (it's similar to
+int(ParseInt(x, 10, 0))), as does
+Itoa(x) (similar to FormatInt(int64(x), 10)).
+There are also new variants of some of the functions that append to byte slices rather than
+return strings, to allow control over allocation.
+
+This table summarizes the renamings; see the +package documentation +for full details. +
+ +Old call | +New call | +
---|---|
Atob(x) | ParseBool(x) |
Atof32(x) | ParseFloat(x, 32)§ |
Atof64(x) | ParseFloat(x, 64) |
AtofN(x, n) | ParseFloat(x, n) |
Atoi(x) | Atoi(x) |
Atoi(x) | ParseInt(x, 10, 0)§ |
Atoi64(x) | ParseInt(x, 10, 64) |
Atoui(x) | ParseUint(x, 10, 0)§ |
Atoui64(x) | ParseUint(x, 10, 64) |
Btoi64(x, b) | ParseInt(x, b, 64) |
Btoui64(x, b) | ParseUint(x, b, 64) |
Btoa(x) | FormatBool(x) |
Ftoa32(x, f, p) | FormatFloat(float64(x), f, p, 32) |
Ftoa64(x, f, p) | FormatFloat(x, f, p, 64) |
FtoaN(x, f, p, n) | FormatFloat(x, f, p, n) |
Itoa(x) | Itoa(x) |
Itoa(x) | FormatInt(int64(x), 10) |
Itoa64(x) | FormatInt(x, 10) |
Itob(x, b) | FormatInt(int64(x), b) |
Itob64(x, b) | FormatInt(x, b) |
Uitoa(x) | FormatUint(uint64(x), 10) |
Uitoa64(x) | FormatUint(x, 10) |
Uitob(x, b) | FormatUint(uint64(x), b) |
Uitob64(x, b) | FormatUint(x, b) |
+Updating:
+Running go
fix
will update almost all code affected by the change.
+
+§ Atoi
persists but Atoui
and Atof32
do not, so
+they may require
+a cast that must be added by hand; the go
fix
tool will warn about it.
+
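+
+A few of the new forms in use (the values are arbitrary):
+
+	b, _ := strconv.ParseBool("true")       // was Atob
+	f, _ := strconv.ParseFloat("1.5", 64)   // was Atof64
+	n, _ := strconv.ParseInt("-42", 10, 64) // was Atoi64
+	s := strconv.FormatInt(n, 16)           // was Itob64
+	buf := strconv.AppendInt(nil, n, 10)    // new append variant
+	_, _, _, _ = b, f, s, buf
+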
+The template
and exp/template/html
packages have moved to
+text/template
and
+html/template
.
+More significant, the interface to these packages has been simplified.
+The template language is the same, but the concept of "template set" is gone
+and the functions and methods of the packages have changed accordingly,
+often by elimination.
+
+Instead of sets, a Template
object
+may contain multiple named template definitions,
+in effect constructing
+name spaces for template invocation.
+A template can invoke any other template associated with it, but only those
+templates associated with it.
+The simplest way to associate templates is to parse them together, something
+made easier with the new structure of the packages.
+
+Updating:
+The imports will be updated by the fix tool.
+Single-template uses will otherwise be largely unaffected.
+Code that uses multiple templates in concert will need to be updated by hand.
+The examples in
+the documentation for text/template
can provide guidance.
+
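+
+For instance, templates can be parsed together and then invoked by name;
+the template text here is made up:
+
+	const pages = `
+	{{define "header"}}Title: {{.}}{{end}}
+	{{define "page"}}{{template "header" .Title}} Body: {{.Body}}{{end}}`
+
+	t, err := template.New("site").Parse(pages)
+	if err != nil {
+		log.Fatal(err)
+	}
+	data := struct{ Title, Body string }{"Go 1", "Hello"}
+	if err := t.ExecuteTemplate(os.Stdout, "page", data); err != nil {
+		log.Fatal(err)
+	}
+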
+The testing package has a type, B
, passed as an argument to benchmark functions.
+In Go 1, B
has new methods, analogous to those of T
, enabling
+logging and failure reporting.
+
+Updating:
+Existing code is unaffected, although benchmarks that use println
+or panic
should be updated to use the new methods.
+
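+
+For instance, a benchmark can now report failure through the B value rather
+than panicking (the benchmark itself is a trivial made-up example):
+
+	func BenchmarkParse(b *testing.B) {
+		for i := 0; i < b.N; i++ {
+			if _, err := strconv.Atoi("12345"); err != nil {
+				b.Fatal(err) // was panic(err) or println before Go 1
+			}
+		}
+	}
+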
+The testing/script package has been deleted. It was a dreg. +
+ ++Updating: +No code is likely to be affected. +
+ +
+In Go 1, the functions
+unsafe.Typeof
, unsafe.Reflect
,
+unsafe.Unreflect
, unsafe.New
, and
+unsafe.NewArray
have been removed;
+they duplicated safer functionality provided by
+package reflect
.
+
+Updating:
+Code using these functions must be rewritten to use
+package reflect
.
+The changes to encoding/gob and the protocol buffer library
+may be helpful as examples.
+
+In Go 1 several fields from the url.URL
type
+were removed or replaced.
+
+The String
method now
+predictably rebuilds an encoded URL string using all of URL
's
+fields as necessary. The resulting string will also no longer have
+passwords escaped.
+
+The Raw
field has been removed. In most cases the String
+method may be used in its place.
+
+The old RawUserinfo
field is replaced by the User
+field, of type *url.Userinfo
.
+Values of this type may be created using the new url.User
+and url.UserPassword
+functions. The EscapeUserinfo
and UnescapeUserinfo
+functions are also gone.
+
+The RawAuthority
field has been removed. The same information is
+available in the Host
and User
fields.
+
+The RawPath
field and the EncodedPath
method have
+been removed. The path information in rooted URLs (with a slash following the
+scheme) is now available only in decoded form in the Path
field.
+Occasionally, the encoded data may be required to obtain information that
+was lost in the decoding process. These cases must be handled by accessing
+the data the URL was built from.
+
+URLs with non-rooted paths, such as "mailto:dev@golang.org?subject=Hi"
,
+are also handled differently. The OpaquePath
boolean field has been
+removed and a new Opaque
string field introduced to hold the encoded
+path for such URLs. In Go 1, the cited URL parses as:
+
+ URL{ + Scheme: "mailto", + Opaque: "dev@golang.org", + RawQuery: "subject=Hi", + } ++ +
+A new RequestURI
method was
+added to URL
.
+
+The ParseWithReference
function has been renamed to ParseWithFragment
.
+
+Updating: +Code that uses the old fields will fail to compile and must be updated by hand. +The semantic changes make it difficult for the fix tool to update automatically. +
+ +
+Go 1 introduces the go command, a tool for fetching,
+building, and installing Go packages and commands. The go
command
+does away with makefiles, instead using Go source code to find dependencies and
+determine build conditions. Most existing Go programs will no longer require
+makefiles to be built.
+
+See How to Write Go Code for a primer on the
+go
command and the go command documentation
+for the full details.
+
+Updating:
+Projects that depend on the Go project's old makefile-based build
+infrastructure (Make.pkg
, Make.cmd
, and so on) should
+switch to using the go
command for building Go code and, if
+necessary, rewrite their makefiles to perform any auxiliary build tasks.
+
+In Go 1, the cgo command
+uses a different _cgo_export.h
+file, which is generated for packages containing //export
lines.
+The _cgo_export.h
file now begins with the C preamble comment,
+so that exported function definitions can use types defined there.
+This has the effect of compiling the preamble multiple times, so a
+package using //export
must not put function definitions
+or variable initializations in the C preamble.
+
+One of the most significant changes associated with Go 1 is the availability +of prepackaged, downloadable distributions. +They are available for many combinations of architecture and operating system +(including Windows) and the list will grow. +Installation details are described on the +Getting Started page, while +the distributions themselves are listed on the +downloads page. diff --git a/_content/doc/go1compat.html b/_content/doc/go1compat.html new file mode 100644 index 00000000..a5624ef5 --- /dev/null +++ b/_content/doc/go1compat.html @@ -0,0 +1,202 @@ + + +
+The release of Go version 1, Go 1 for short, is a major milestone +in the development of the language. Go 1 is a stable platform for +the growth of programs and projects written in Go. +
+ ++Go 1 defines two things: first, the specification of the language; +and second, the specification of a set of core APIs, the "standard +packages" of the Go library. The Go 1 release includes their +implementation in the form of two compiler suites (gc and gccgo), +and the core libraries themselves. +
+ ++It is intended that programs written to the Go 1 specification will +continue to compile and run correctly, unchanged, over the lifetime +of that specification. At some indefinite point, a Go 2 specification +may arise, but until that time, Go programs that work today should +continue to work even as future "point" releases of Go 1 arise (Go +1.1, Go 1.2, etc.). +
+ ++Compatibility is at the source level. Binary compatibility for +compiled packages is not guaranteed between releases. After a point +release, Go source will need to be recompiled to link against the +new release. +
+ ++The APIs may grow, acquiring new packages and features, but not in +a way that breaks existing Go 1 code. +
+ ++Although we expect that the vast majority of programs will maintain +this compatibility over time, it is impossible to guarantee that +no future change will break any program. This document is an attempt +to set expectations for the compatibility of Go 1 software in the +future. There are a number of ways in which a program that compiles +and runs today may fail to do so after a future point release. They +are all unlikely but worth recording. +
+ +import . "path"
, additional names defined in the
+imported package in future releases may conflict with other names
+defined in the program. We do not recommend the use of import .
+outside of tests, and using it may cause a program to fail
+to compile in future releases.
+unsafe
. Packages that import
+unsafe
+may depend on internal properties of the Go implementation.
+We reserve the right to make changes to the implementation
+that may break such programs.
++Of course, for all of these possibilities, should they arise, we +would endeavor whenever feasible to update the specification, +compilers, or libraries without affecting existing code. +
+ ++These same considerations apply to successive point releases. For +instance, code that runs under Go 1.2 should be compatible with Go +1.2.1, Go 1.3, Go 1.4, etc., although not necessarily with Go 1.1 +since it may use features added only in Go 1.2 +
+ ++Features added between releases, available in the source repository +but not part of the numbered binary releases, are under active +development. No promise of compatibility is made for software using +such features until they have been released. +
+ ++Finally, although it is not a correctness issue, it is possible +that the performance of a program may be affected by +changes in the implementation of the compilers or libraries upon +which it depends. +No guarantee can be made about the performance of a +given program between releases. +
+ ++Although these expectations apply to Go 1 itself, we hope similar +considerations would be made for the development of externally +developed software based on Go 1. +
+ ++Code in sub-repositories of the main go tree, such as +golang.org/x/net, +may be developed under +looser compatibility requirements. However, the sub-repositories +will be tagged as appropriate to identify versions that are compatible +with the Go 1 point releases. +
+ +
+It is impossible to guarantee long-term compatibility with operating
+system interfaces, which are changed by outside parties.
+The syscall
package
+is therefore outside the purview of the guarantees made here.
+As of Go version 1.4, the syscall
package is frozen.
+Any evolution of the system call interface must be supported elsewhere,
+such as in the
+go.sys subrepository.
+For details and background, see
+this document.
+
+Finally, the Go toolchain (compilers, linkers, build tools, and so +on) is under active development and may change behavior. This +means, for instance, that scripts that depend on the location and +properties of the tools may be broken by a point release. +
+ ++These caveats aside, we believe that Go 1 will be a firm foundation +for the development of Go and its ecosystem. +
diff --git a/_content/doc/go_faq.html b/_content/doc/go_faq.html new file mode 100644 index 00000000..23a3080c --- /dev/null +++ b/_content/doc/go_faq.html @@ -0,0 +1,2475 @@ + + ++At the time of Go's inception, only a decade ago, the programming world was different from today. +Production software was usually written in C++ or Java, +GitHub did not exist, most computers were not yet multiprocessors, +and other than Visual Studio and Eclipse there were few IDEs or other high-level tools available +at all, let alone for free on the Internet. +
+ ++Meanwhile, we had become frustrated by the undue complexity required to use +the languages we worked with to develop server software. +Computers had become enormously quicker since languages such as +C, C++ and Java were first developed but the act of programming had not +itself advanced nearly as much. +Also, it was clear that multiprocessors were becoming universal but +most languages offered little help to program them efficiently +and safely. +
+ ++We decided to take a step back and think about what major issues were +going to dominate software engineering in the years ahead as technology +developed, and how a new language might help address them. +For instance, the rise of multicore CPUs argued that a language should +provide first-class support for some sort of concurrency or parallelism. +And to make resource management tractable in a large concurrent program, +garbage collection, or at least some sort of safe automatic memory management was required. +
+ ++These considerations led to +a +series of discussions from which Go arose, first as a set of ideas and +desiderata, then as a language. +An overarching goal was that Go do more to help the working programmer +by enabling tooling, automating mundane tasks such as code formatting, +and removing obstacles to working on large code bases. +
+ ++A much more expansive description of the goals of Go and how +they are met, or at least approached, is available in the article, +Go at Google: +Language Design in the Service of Software Engineering. +
+ ++Robert Griesemer, Rob Pike and Ken Thompson started sketching the +goals for a new language on the white board on September 21, 2007. +Within a few days the goals had settled into a plan to do something +and a fair idea of what it would be. Design continued part-time in +parallel with unrelated work. By January 2008, Ken had started work +on a compiler with which to explore ideas; it generated C code as its +output. By mid-year the language had become a full-time project and +had settled enough to attempt a production compiler. In May 2008, +Ian Taylor independently started on a GCC front end for Go using the +draft specification. Russ Cox joined in late 2008 and helped move the language +and libraries from prototype to reality. +
+ ++Go became a public open source project on November 10, 2009. +Countless people from the community have contributed ideas, discussions, and code. +
+ ++There are now millions of Go programmers—gophers—around the world, +and there are more every day. +Go's success has far exceeded our expectations. +
+ ++The mascot and logo were designed by +Renée French, who also designed +Glenda, +the Plan 9 bunny. +A blog post +about the gopher explains how it was +derived from one she used for a WFMU +T-shirt design some years ago. +The logo and mascot are covered by the +Creative Commons Attribution 3.0 +license. +
+ ++The gopher has a +model sheet +illustrating his characteristics and how to represent them correctly. +The model sheet was first shown in a +talk +by Renée at Gophercon in 2016. +He has unique features; he's the Go gopher, not just any old gopher. +
+ ++The language is called Go. +The "golang" moniker arose because the web site is +golang.org, not +go.org, which was not available to us. +Many use the golang name, though, and it is handy as +a label. +For instance, the Twitter tag for the language is "#golang". +The language's name is just plain Go, regardless. +
+ ++A side note: Although the +official logo +has two capital letters, the language name is written Go, not GO. +
+ ++Go was born out of frustration with existing languages and +environments for the work we were doing at Google. +Programming had become too +difficult and the choice of languages was partly to blame. One had to +choose either efficient compilation, efficient execution, or ease of +programming; all three were not available in the same mainstream +language. Programmers who could were choosing ease over +safety and efficiency by moving to dynamically typed languages such as +Python and JavaScript rather than C++ or, to a lesser extent, Java. +
+ ++We were not alone in our concerns. +After many years with a pretty quiet landscape for programming languages, +Go was among the first of several new languages—Rust, +Elixir, Swift, and more—that have made programming language development +an active, almost mainstream field again. +
+ ++Go addressed these issues by attempting to combine the ease of programming of an interpreted, +dynamically typed +language with the efficiency and safety of a statically typed, compiled language. +It also aimed to be modern, with support for networked and multicore +computing. Finally, working with Go is intended to be fast: it should take +at most a few seconds to build a large executable on a single computer. +To meet these goals required addressing a number of +linguistic issues: an expressive but lightweight type system; +concurrency and garbage collection; rigid dependency specification; +and so on. These cannot be addressed well by libraries or tools; a new +language was called for. +
+ ++The article Go at Google +discusses the background and motivation behind the design of the Go language, +as well as providing more detail about many of the answers presented in this FAQ. +
+ + ++Go is mostly in the C family (basic syntax), +with significant input from the Pascal/Modula/Oberon +family (declarations, packages), +plus some ideas from languages +inspired by Tony Hoare's CSP, +such as Newsqueak and Limbo (concurrency). +However, it is a new language across the board. +In every respect the language was designed by thinking +about what programmers do and how to make programming, at least the +kind of programming we do, more effective, which means more fun. +
+ ++When Go was designed, Java and C++ were the most commonly +used languages for writing servers, at least at Google. +We felt that these languages required +too much bookkeeping and repetition. +Some programmers reacted by moving towards more dynamic, +fluid languages like Python, at the cost of efficiency and +type safety. +We felt it should be possible to have the efficiency, +the safety, and the fluidity in a single language. +
+ +
+Go attempts to reduce the amount of typing in both senses of the word.
+Throughout its design, we have tried to reduce clutter and
+complexity. There are no forward declarations and no header files;
+everything is declared exactly once. Initialization is expressive,
+automatic, and easy to use. Syntax is clean and light on keywords.
+Stuttering (foo.Foo* myFoo = new(foo.Foo)
) is reduced by
+simple type derivation using the :=
+declare-and-initialize construct. And perhaps most radically, there
+is no type hierarchy: types just are, they don't have to
+announce their relationships. These simplifications allow Go to be
+expressive yet comprehensible without sacrificing, well, sophistication.
+
+Another important principle is to keep the concepts orthogonal. +Methods can be implemented for any type; structures represent data while +interfaces represent abstraction; and so on. Orthogonality makes it +easier to understand what happens when things combine. +
+ +
+Yes. Go is used widely in production inside Google.
+One easy example is the server behind
+golang.org.
+It's just the godoc
+document server running in a production configuration on
+Google App Engine.
+
+A more significant instance is Google's download server, dl.google.com
,
+which delivers Chrome binaries and other large installables such as apt-get
+packages.
+
+Go is not the only language used at Google, far from it, but it is a key language +for a number of areas including +site reliability +engineering (SRE) +and large-scale data processing. +
+ ++Go usage is growing worldwide, especially but by no means exclusively +in the cloud computing space. +A couple of major cloud infrastructure projects written in Go are +Docker and Kubernetes, +but there are many more. +
+ ++It's not just cloud, though. +The Go Wiki includes a +page, +updated regularly, that lists some of the many companies using Go. +
+ ++The Wiki also has a page with links to +success stories +about companies and projects that are using the language. +
+ ++It is possible to use C and Go together in the same address space, +but it is not a natural fit and can require special interface software. +Also, linking C with Go code gives up the memory +safety and stack management properties that Go provides. +Sometimes it's absolutely necessary to use C libraries to solve a problem, +but doing so always introduces an element of risk not present with +pure Go code, so do so with care. +
+ +
+If you do need to use C with Go, how to proceed depends on the Go
+compiler implementation.
+There are three Go compiler implementations supported by the
+Go team.
+These are gc
, the default compiler,
+gccgo
, which uses the GCC back end,
+and a somewhat less mature gollvm
, which uses the LLVM infrastructure.
+
+Gc
uses a different calling convention and linker from C and
+therefore cannot be called directly from C programs, or vice versa.
+The cgo
program provides the mechanism for a
+“foreign function interface” to allow safe calling of
+C libraries from Go code.
+SWIG extends this capability to C++ libraries.
+
+You can also use cgo
and SWIG with Gccgo
and gollvm
.
+Since they use a traditional API, it's also possible, with great care,
+to link code from these compilers directly with GCC/LLVM-compiled C or C++ programs.
+However, doing so safely requires an understanding of the calling conventions for
+all languages concerned, as well as concern for stack limits when calling C or C++
+from Go.
+
+The Go project does not include a custom IDE, but the language and +libraries have been designed to make it easy to analyze source code. +As a consequence, most well-known editors and IDEs support Go well, +either directly or through a plugin. +
+ ++The list of well-known IDEs and editors that have good Go support +available includes Emacs, Vim, VSCode, Atom, Eclipse, Sublime, IntelliJ +(through a custom variant called Goland), and many more. +Chances are your favorite environment is a productive one for +programming in Go. +
+ ++A separate open source project provides the necessary compiler plugin and library. +It is available at +github.com/golang/protobuf/. +
+ + ++Absolutely. We encourage developers to make Go Language sites in their own languages. +However, if you choose to add the Google logo or branding to your site +(it does not appear on golang.org), +you will need to abide by the guidelines at +www.google.com/permissions/guidelines.html +
+ +
+Go does have an extensive library, called the runtime,
+that is part of every Go program.
+The runtime library implements garbage collection, concurrency,
+stack management, and other critical features of the Go language.
+Although it is more central to the language, Go's runtime is analogous
+to libc
, the C library.
+
+It is important to understand, however, that Go's runtime does not +include a virtual machine, such as is provided by the Java runtime. +Go programs are compiled ahead of time to native machine code +(or JavaScript or WebAssembly, for some variant implementations). +Thus, although the term is often used to describe the virtual +environment in which a program runs, in Go the word “runtime” +is just the name given to the library providing critical language services. +
+ ++When designing Go, we wanted to make sure that it was not +overly ASCII-centric, +which meant extending the space of identifiers from the +confines of 7-bit ASCII. +Go's rule—identifier characters must be +letters or digits as defined by Unicode—is simple to understand +and to implement but has restrictions. +Combining characters are +excluded by design, for instance, +and that excludes some languages such as Devanagari. +
+ +
+This rule has one other unfortunate consequence.
+Since an exported identifier must begin with an
+upper-case letter, identifiers created from characters
+in some languages can, by definition, not be exported.
+For now the
+only solution is to use something like X日本語
, which
+is clearly unsatisfactory.
+
+Since the earliest version of the language, there has been considerable +thought into how best to expand the identifier space to accommodate +programmers using other native languages. +Exactly what to do remains an active topic of discussion, and a future +version of the language may be more liberal in its definition +of an identifier. +For instance, it might adopt some of the ideas from the Unicode +organization's recommendations +for identifiers. +Whatever happens, it must be done compatibly while preserving +(or perhaps expanding) the way letter case determines visibility of +identifiers, which remains one of our favorite features of Go. +
+ ++For the time being, we have a simple rule that can be expanded later +without breaking programs, one that avoids bugs that would surely arise +from a rule that admits ambiguous identifiers. +
+ ++Every language contains novel features and omits someone's favorite +feature. Go was designed with an eye on felicity of programming, speed of +compilation, orthogonality of concepts, and the need to support features +such as concurrency and garbage collection. Your favorite feature may be +missing because it doesn't fit, because it affects compilation speed or +clarity of design, or because it would make the fundamental system model +too difficult. +
+ ++If it bothers you that Go is missing feature X, +please forgive us and investigate the features that Go does have. You might find that +they compensate in interesting ways for the lack of X. +
+ ++Generics may well be added at some point. We don't feel an urgency for +them, although we understand some programmers do. +
+ ++Go was intended as a language for writing server programs that would be +easy to maintain over time. +(See this +article for more background.) +The design concentrated on things like scalability, readability, and +concurrency. +Polymorphic programming did not seem essential to the language's +goals at the time, and so was left out for simplicity. +
+ ++The language is more mature now, and there is scope to consider +some form of generic programming. +However, there remain some caveats. +
+ ++Generics are convenient but they come at a cost in +complexity in the type system and run-time. We haven't yet found a +design that gives value proportionate to the complexity, although we +continue to think about it. Meanwhile, Go's built-in maps and slices, +plus the ability to use the empty interface to construct containers +(with explicit unboxing) mean in many cases it is possible to write +code that does what generics would enable, if less smoothly. +
+ ++The topic remains open. +For a look at several previous unsuccessful attempts to +design a good generics solution for Go, see +this proposal. +
+ +
+We believe that coupling exceptions to a control
+structure, as in the try-catch-finally
idiom, results in
+convoluted code. It also tends to encourage programmers to label
+too many ordinary errors, such as failing to open a file, as
+exceptional.
+
+Go takes a different approach. For plain error handling, Go's multi-value +returns make it easy to report an error without overloading the return value. +A canonical error type, coupled +with Go's other features, makes error handling pleasant but quite different +from that in other languages. +
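+
+A sketch of the usual pattern (the file name is a placeholder):
+
+	f, err := os.Open("config.json")
+	if err != nil {
+		return err // report the problem to the caller
+	}
+	defer f.Close()
+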
+ ++Go also has a couple +of built-in functions to signal and recover from truly exceptional +conditions. The recovery mechanism is executed only as part of a +function's state being torn down after an error, which is sufficient +to handle catastrophe but requires no extra control structures and, +when used well, can result in clean error-handling code. +
+ ++See the Defer, Panic, and Recover article for details. +Also, the Errors are values blog post +describes one approach to handling errors cleanly in Go by demonstrating that, +since errors are just values, the full power of Go can be deployed in error handling. +
+ ++Go doesn't provide assertions. They are undeniably convenient, but our +experience has been that programmers use them as a crutch to avoid thinking +about proper error handling and reporting. Proper error handling means that +servers continue to operate instead of crashing after a non-fatal error. +Proper error reporting means that errors are direct and to the point, +saving the programmer from interpreting a large crash trace. Precise +errors are particularly important when the programmer seeing the errors is +not familiar with the code. +
+ ++We understand that this is a point of contention. There are many things in +the Go language and libraries that differ from modern practices, simply +because we feel it's sometimes worth trying a different approach. +
+ ++Concurrency and multi-threaded programming have over time +developed a reputation for difficulty. We believe this is due partly to complex +designs such as +pthreads +and partly to overemphasis on low-level details +such as mutexes, condition variables, and memory barriers. +Higher-level interfaces enable much simpler code, even if there are still +mutexes and such under the covers. +
+ ++One of the most successful models for providing high-level linguistic support +for concurrency comes from Hoare's Communicating Sequential Processes, or CSP. +Occam and Erlang are two well known languages that stem from CSP. +Go's concurrency primitives derive from a different part of the family tree +whose main contribution is the powerful notion of channels as first class objects. +Experience with several earlier languages has shown that the CSP model +fits well into a procedural language framework. +
+ ++Goroutines are part of making concurrency easy to use. The idea, which has +been around for a while, is to multiplex independently executing +functions—coroutines—onto a set of threads. +When a coroutine blocks, such as by calling a blocking system call, +the run-time automatically moves other coroutines on the same operating +system thread to a different, runnable thread so they won't be blocked. +The programmer sees none of this, which is the point. +The result, which we call goroutines, can be very cheap: they have little +overhead beyond the memory for the stack, which is just a few kilobytes. +
+ ++To make the stacks small, Go's run-time uses resizable, bounded stacks. A newly +minted goroutine is given a few kilobytes, which is almost always enough. +When it isn't, the run-time grows (and shrinks) the memory for storing +the stack automatically, allowing many goroutines to live in a modest +amount of memory. +The CPU overhead averages about three cheap instructions per function call. +It is practical to create hundreds of thousands of goroutines in the same +address space. +If goroutines were just threads, system resources would +run out at a much smaller number. +
+ ++After long discussion it was decided that the typical use of maps did not require +safe access from multiple goroutines, and in those cases where it did, the map was +probably part of some larger data structure or computation that was already +synchronized. Therefore requiring that all map operations grab a mutex would slow +down most programs and add safety to few. This was not an easy decision, +however, since it means uncontrolled map access can crash the program. +
+ ++The language does not preclude atomic map updates. When required, such +as when hosting an untrusted program, the implementation could interlock +map access. +
+ +
+Map access is unsafe only when updates are occurring.
+As long as all goroutines are only reading—looking up elements in the map,
+including iterating through it using a
+for
range
loop—and not changing the map
+by assigning to elements or doing deletions,
+it is safe for them to access the map concurrently without synchronization.
+
+As an aid to correct map use, some implementations of the language +contain a special check that automatically reports at run time when a map is modified +unsafely by concurrent execution. +
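+
+When a map must be shared across goroutines, one common approach is to guard
+it with a mutex; the counter type below is a made-up example:
+
+	type counters struct {
+		mu sync.RWMutex
+		m  map[string]int
+	}
+
+	func (c *counters) Add(key string) {
+		c.mu.Lock()
+		c.m[key]++
+		c.mu.Unlock()
+	}
+
+	func (c *counters) Get(key string) int {
+		c.mu.RLock()
+		defer c.mu.RUnlock()
+		return c.m[key]
+	}
+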
+ ++People often suggest improvements to the language—the +mailing list +contains a rich history of such discussions—but very few of these changes have +been accepted. +
+ ++Although Go is an open source project, the language and libraries are protected +by a compatibility promise that prevents +changes that break existing programs, at least at the source code level +(programs may need to be recompiled occasionally to stay current). +If your proposal violates the Go 1 specification we cannot even entertain the +idea, regardless of its merit. +A future major release of Go may be incompatible with Go 1, but discussions +on that topic have only just begun and one thing is certain: +there will be very few such incompatibilities introduced in the process. +Moreover, the compatibility promise encourages us to provide an automatic path +forward for old programs to adapt should that situation arise. +
+ ++Even if your proposal is compatible with the Go 1 spec, it might +not be in the spirit of Go's design goals. +The article Go +at Google: Language Design in the Service of Software Engineering +explains Go's origins and the motivation behind its design. +
+ ++Yes and no. Although Go has types and methods and allows an +object-oriented style of programming, there is no type hierarchy. +The concept of “interface” in Go provides a different approach that +we believe is easy to use and in some ways more general. There are +also ways to embed types in other types to provide something +analogous—but not identical—to subclassing. +Moreover, methods in Go are more general than in C++ or Java: +they can be defined for any sort of data, even built-in types such +as plain, “unboxed” integers. +They are not restricted to structs (classes). +
+ ++Also, the lack of a type hierarchy makes “objects” in Go feel much more +lightweight than in languages such as C++ or Java. +
+ ++The only way to have dynamically dispatched methods is through an +interface. Methods on a struct or any other concrete type are always resolved statically. +
+ ++Object-oriented programming, at least in the best-known languages, +involves too much discussion of the relationships between types, +relationships that often could be derived automatically. Go takes a +different approach. +
+ ++Rather than requiring the programmer to declare ahead of time that two +types are related, in Go a type automatically satisfies any interface +that specifies a subset of its methods. Besides reducing the +bookkeeping, this approach has real advantages. Types can satisfy +many interfaces at once, without the complexities of traditional +multiple inheritance. +Interfaces can be very lightweight—an interface with +one or even zero methods can express a useful concept. +Interfaces can be added after the fact if a new idea comes along +or for testing—without annotating the original types. +Because there are no explicit relationships between types +and interfaces, there is no type hierarchy to manage or discuss. +
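+For example, in this small sketch (the Celsius type is made up for
+illustration) the type satisfies fmt.Stringer simply by having the right
+method; nothing declares the relationship:
+
+package main
+
+import "fmt"
+
+type Celsius float64
+
+// String gives Celsius the one method fmt.Stringer asks for.
+func (c Celsius) String() string {
+	return fmt.Sprintf("%.1f°C", float64(c))
+}
+
+func main() {
+	var s fmt.Stringer = Celsius(21.5) // no "implements" declaration needed
+	fmt.Println(s)                     // 21.5°C
+}
+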
+ +
+It's possible to use these ideas to construct something analogous to
+type-safe Unix pipes. For instance, see how fmt.Fprintf
+enables formatted printing to any output, not just a file, or how the
+bufio
package can be completely separate from file I/O,
+or how the image
packages generate compressed
+image files. All these ideas stem from a single interface
+(io.Writer
) representing a single method
+(Write
). And that's only scratching the surface.
+Go's interfaces have a profound influence on how programs are structured.
+
+It takes some getting used to but this implicit style of type +dependency is one of the most productive things about Go. +
+ +len
a function and not a method?
+We debated this issue but decided
+implementing len
and friends as functions was fine in practice and
+didn't complicate questions about the interface (in the Go type sense)
+of basic types.
+
+Method dispatch is simplified if it doesn't need to do type matching as well. +Experience with other languages told us that having a variety of +methods with the same name but different signatures was occasionally useful +but that it could also be confusing and fragile in practice. Matching only by name +and requiring consistency in the types was a major simplifying decision +in Go's type system. +
+ ++Regarding operator overloading, it seems more a convenience than an absolute +requirement. Again, things are simpler without it. +
+ ++A Go type satisfies an interface by implementing the methods of that interface, +nothing more. This property allows interfaces to be defined and used without +needing to modify existing code. It enables a kind of +structural typing that +promotes separation of concerns and improves code re-use, and makes it easier +to build on patterns that emerge as the code develops. +The semantics of interfaces is one of the main reasons for Go's nimble, +lightweight feel. +
+ ++See the question on type inheritance for more detail. +
+ +
+You can ask the compiler to check that the type T
implements the
+interface I
by attempting an assignment using the zero value for
+T
or pointer to T
, as appropriate:
+
+type T struct{} +var _ I = T{} // Verify that T implements I. +var _ I = (*T)(nil) // Verify that *T implements I. ++ +
+If T
(or *T
, accordingly) doesn't implement
+I
, the mistake will be caught at compile time.
+
+If you wish the users of an interface to explicitly declare that they implement +it, you can add a method with a descriptive name to the interface's method set. +For example: +
+ ++type Fooer interface { + Foo() + ImplementsFooer() +} ++ +
+A type must then implement the ImplementsFooer
method to be a
+Fooer
, clearly documenting the fact and announcing it in
+go doc's output.
+
+type Bar struct{} +func (b Bar) ImplementsFooer() {} +func (b Bar) Foo() {} ++ +
+Most code doesn't make use of such constraints, since they limit the utility of +the interface idea. Sometimes, though, they're necessary to resolve ambiguities +among similar interfaces. +
+ ++Consider this simple interface to represent an object that can compare +itself with another value: +
+ ++type Equaler interface { + Equal(Equaler) bool +} ++ +
+and this type, T
:
+
+type T int +func (t T) Equal(u T) bool { return t == u } // does not satisfy Equaler ++ +
+Unlike the analogous situation in some polymorphic type systems,
+T
does not implement Equaler
.
+The argument type of T.Equal
is T
,
+not literally the required type Equaler
.
+
+In Go, the type system does not promote the argument of
+Equal
; that is the programmer's responsibility, as
+illustrated by the type T2
, which does implement
+Equaler
:
+
+type T2 int +func (t T2) Equal(u Equaler) bool { return t == u.(T2) } // satisfies Equaler ++ +
+Even this isn't like other type systems, though, because in Go any
+type that satisfies Equaler
could be passed as the
+argument to T2.Equal
, and at run time we must
+check that the argument is of type T2
.
+Some languages arrange to make that guarantee at compile time.
+
+A related example goes the other way: +
+ ++type Opener interface { + Open() Reader +} + +func (t T3) Open() *os.File ++ +
+In Go, T3
does not satisfy Opener
,
+although it might in another language.
+
+While it is true that Go's type system does less for the programmer +in such cases, the lack of subtyping makes the rules about +interface satisfaction very easy to state: are the function's names +and signatures exactly those of the interface? +Go's rule is also easy to implement efficiently. +We feel these benefits offset the lack of +automatic type promotion. Should Go one day adopt some form of polymorphic +typing, we expect there would be a way to express the idea of these +examples and also have them be statically checked. +
+ +
+Not directly.
+It is disallowed by the language specification because the two types
+do not have the same representation in memory.
+It is necessary to copy the elements individually to the destination
+slice. This example converts a slice of int
to a slice of
+interface{}
:
+
+t := []int{1, 2, 3, 4} +s := make([]interface{}, len(t)) +for i, v := range t { + s[i] = v +} ++ +
+type T1 int +type T2 int +var t1 T1 +var x = T2(t1) // OK +var st1 []T1 +var sx = ([]T2)(st1) // NOT OK ++ +
+In Go, types are closely tied to methods, in that every named type has +a (possibly empty) method set. +The general rule is that you can change the name of the type being +converted (and thus possibly change its method set) but you can't +change the name (and method set) of elements of a composite type. +Go requires you to be explicit about type conversions. +
+ +
+Under the covers, interfaces are implemented as two elements, a type T
+and a value V
.
+V
is a concrete value such as an int
,
+struct
or pointer, never an interface itself, and has
+type T
.
+For instance, if we store the int
value 3 in an interface,
+the resulting interface value has, schematically,
+(T=int
, V=3
).
+The value V
is also known as the interface's
+dynamic value,
+since a given interface variable might hold different values V
+(and corresponding types T
)
+during the execution of the program.
+
+An interface value is nil
only if the V
and T
+are both unset (T=nil, V is not set).
+In particular, a nil
interface will always hold a nil
type.
+If we store a nil
pointer of type *int
inside
+an interface value, the inner type will be *int
regardless of the value of the pointer:
+(T=*int
, V=nil
).
+Such an interface value will therefore be non-nil
+even when the pointer value V
inside is nil
.
+
+This situation can be confusing, and arises when a nil
value is
+stored inside an interface value such as an error
return:
+
+func returnsError() error { + var p *MyError = nil + if bad() { + p = ErrBad + } + return p // Will always return a non-nil error. +} ++ +
+If all goes well, the function returns a nil
p
,
+so the return value is an error
interface
+value holding (T=*MyError
, V=nil
).
+This means that if the caller compares the returned error to nil
,
+it will always look as if there was an error even if nothing bad happened.
+To return a proper nil
error
to the caller,
+the function must return an explicit nil
:
+
+func returnsError() error { + if bad() { + return ErrBad + } + return nil +} ++ +
+It's a good idea for functions
+that return errors always to use the error
type in
+their signature (as we did above) rather than a concrete type such
+as *MyError
, to help guarantee the error is
+created correctly. As an example,
+os.Open
+returns an error
even though, if not nil
,
+it's always of concrete type
+*os.PathError
.
+
+Similar situations to those described here can arise whenever interfaces are used.
+Just keep in mind that if any concrete value
+has been stored in the interface, the interface will not be nil
.
+For more information, see
+The Laws of Reflection.
+
+Untagged unions would violate Go's memory safety +guarantees. +
+ ++Variant types, also known as algebraic types, provide a way to specify +that a value might take one of a set of other types, but only those +types. A common example in systems programming would specify that an +error is, say, a network error, a security error or an application +error and allow the caller to discriminate the source of the problem +by examining the type of the error. Another example is a syntax tree +in which each node can be a different type: declaration, statement, +assignment and so on. +
+ ++We considered adding variant types to Go, but after discussion +decided to leave them out because they overlap in confusing ways +with interfaces. What would happen if the elements of a variant type +were themselves interfaces? +
+ ++Also, some of what variant types address is already covered by the +language. The error example is easy to express using an interface +value to hold the error and a type switch to discriminate cases. The +syntax tree example is also doable, although not as elegantly. +
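+For instance, the error case might be handled with an interface value and a
+type switch; the error types below are hypothetical, but the pattern is
+ordinary Go:
+
+package main
+
+import (
+	"errors"
+	"fmt"
+)
+
+// Hypothetical error types for illustration.
+type NetworkError struct{ Host string }
+type SecurityError struct{ User string }
+
+func (e *NetworkError) Error() string  { return "network error talking to " + e.Host }
+func (e *SecurityError) Error() string { return "security error for user " + e.User }
+
+// describe discriminates the source of the problem by examining the type.
+func describe(err error) string {
+	switch e := err.(type) {
+	case *NetworkError:
+		return "retryable: " + e.Host
+	case *SecurityError:
+		return "fatal: " + e.User
+	default:
+		return "unknown: " + err.Error()
+	}
+}
+
+func main() {
+	fmt.Println(describe(&NetworkError{Host: "example.com"}))
+	fmt.Println(describe(errors.New("something else")))
+}
+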
+ ++Covariant result types would mean that an interface like +
+ ++type Copyable interface { + Copy() interface{} +} ++ +
+would be satisfied by the method +
+ ++func (v Value) Copy() Value ++ +
because Value
implements the empty interface.
+In Go method types must match exactly, so Value
does not
+implement Copyable
.
+Go separates the notion of what a
+type does—its methods—from the type's implementation.
+If two methods return different types, they are not doing the same thing.
+Programmers who want covariant result types are often trying to
+express a type hierarchy through interfaces.
+In Go it's more natural to have a clean separation between interface
+and implementation.
+
+
+The convenience of automatic conversion between numeric types in C is
+outweighed by the confusion it causes. When is an expression unsigned?
+How big is the value? Does it overflow? Is the result portable, independent
+of the machine on which it executes?
+It also complicates the compiler; “the usual arithmetic conversions”
+are not easy to implement and are inconsistent across architectures.
+For reasons of portability, we decided to make things clear and straightforward
+at the cost of some explicit conversions in the code.
+The definition of constants in Go—arbitrary precision values free
+of signedness and size annotations—ameliorates matters considerably,
+though.
+
+ +
+A related detail is that, unlike in C, int
and int64
+are distinct types even if int
is a 64-bit type. The int
+type is generic; if you care about how many bits an integer holds, Go
+encourages you to be explicit.
+
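+A short sketch of the consequence: mixing int and int64 requires an explicit
+conversion even when both happen to be 64 bits wide.
+
+package main
+
+import "fmt"
+
+func main() {
+	var n int = 42
+	var big int64 = int64(n) // explicit conversion required
+	// sum := n + big        // compile error: mismatched types int and int64
+	sum := int64(n) + big
+	fmt.Println(sum) // 84
+}
+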
+Although Go is strict about conversion between variables of different
+numeric types, constants in the language are much more flexible.
+Literal constants such as 23
, 3.14159
+and math.Pi
+occupy a sort of ideal number space, with arbitrary precision and
+no overflow or underflow.
+For instance, the value of math.Pi
is specified to 63 places
+in the source code, and constant expressions involving the value keep
+precision beyond what a float64
could hold.
+Only when the constant or constant expression is assigned to a
+variable—a memory location in the program—does
+it become a "computer" number with
+the usual floating-point properties and precision.
+
+Also, +because they are just numbers, not typed values, constants in Go can be +used more freely than variables, thereby softening some of the awkwardness +around the strict conversion rules. +One can write expressions such as +
+ ++sqrt2 := math.Sqrt(2) ++ +
+without complaint from the compiler because the ideal number 2
+can be converted safely and accurately
+to a float64
for the call to math.Sqrt
.
+
+A blog post titled Constants +explores this topic in more detail. +
+ ++The same reason strings are: they are such a powerful and important data +structure that providing one excellent implementation with syntactic support +makes programming more pleasant. We believe that Go's implementation of maps +is strong enough that it will serve for the vast majority of uses. +If a specific application can benefit from a custom implementation, it's possible +to write one but it will not be as convenient syntactically; this seems a reasonable tradeoff. +
+ ++Map lookup requires an equality operator, which slices do not implement. +They don't implement equality because equality is not well defined on such types; +there are multiple considerations involving shallow vs. deep comparison, pointer vs. +value comparison, how to deal with recursive types, and so on. +We may revisit this issue—and implementing equality for slices +will not invalidate any existing programs—but without a clear idea of what +equality of slices should mean, it was simpler to leave it out for now. +
+ ++In Go 1, unlike prior releases, equality is defined for structs and arrays, so such +types can be used as map keys. Slices still do not have a definition of equality, though. +
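+A small fixed-size key can therefore often be expressed as an array or a
+struct instead of a slice. A sketch (the types here are illustrative):
+
+package main
+
+import "fmt"
+
+// point is comparable, so it can be a map key; a []int cannot.
+type point struct{ x, y int }
+
+func main() {
+	visited := map[point]bool{}
+	visited[point{1, 2}] = true
+
+	// An array key works too, because equality is defined for arrays.
+	grid := map[[2]int]string{
+		{0, 0}: "origin",
+	}
+	fmt.Println(visited[point{1, 2}], grid[[2]int{0, 0}]) // true origin
+}
+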
+
+There's a lot of history on that topic. Early on, maps and channels
+were syntactically pointers and it was impossible to declare or use a
+non-pointer instance. Also, we struggled with how arrays should work.
+Eventually we decided that the strict separation of pointers and
+values made the language harder to use. Changing these
+types to act as references to the associated, shared data structures resolved
+these issues. This change added some regrettable complexity to the
+language but had a large effect on usability: Go became a more
+productive, comfortable language when this change was introduced.
+
+ +
+There is a program, godoc
, written in Go, that extracts
+package documentation from the source code and serves it as a web
+page with links to declarations, files, and so on.
+An instance is running at
+golang.org/pkg/.
+In fact, godoc
implements the full site at
+golang.org/.
+
+A godoc
instance may be configured to provide rich,
+interactive static analyses of symbols in the programs it displays; details are
+listed here.
+
+For access to documentation from the command line, the +go tool has a +doc +subcommand that provides a textual interface to the same information. +
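+For example (output abbreviated; the exact text varies between releases):
+
+$ go doc fmt.Printf
+func Printf(format string, a ...interface{}) (n int, err error)
+    Printf formats according to a format specifier and writes to standard
+    output. ...
+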
+ ++There is no explicit style guide, although there is certainly +a recognizable "Go style". +
+ +
+Go has established conventions to guide decisions around
+naming, layout, and file organization.
+The document Effective Go
+contains some advice on these topics.
+More directly, the program gofmt
is a pretty-printer
+whose purpose is to enforce layout rules; it replaces the usual
+compendium of do's and don'ts that allows interpretation.
+All the Go code in the repository, and the vast majority in the
+open source world, has been run through gofmt
.
+
+The document titled +Go Code Review Comments +is a collection of very short essays about details of Go idiom that are often +missed by programmers. +It is a handy reference for people doing code reviews for Go projects. +
+ +
+The library sources are in the src
directory of the repository.
+If you want to make a significant change, please discuss on the mailing list before embarking.
+
+See the document +Contributing to the Go project +for more information about how to proceed. +
+ +
+Companies often permit outgoing traffic only on the standard TCP ports 80 (HTTP)
+and 443 (HTTPS), blocking outgoing traffic on other ports, including TCP port 9418
+(git) and TCP port 22 (SSH).
+When using HTTPS instead of HTTP, git
enforces certificate validation by
+default, providing protection against man-in-the-middle, eavesdropping and tampering attacks.
+The go get
command therefore uses HTTPS for safety.
+
+Git
can be configured to authenticate over HTTPS or to use SSH in place of HTTPS.
+To authenticate over HTTPS, you can add a line
+to the $HOME/.netrc
file that git consults:
+
+machine github.com login USERNAME password APIKEY ++
+For GitHub accounts, the password can be a +personal access token. +
+ +
+Git
can also be configured to use SSH in place of HTTPS for URLs matching a given prefix.
+For example, to use SSH for all GitHub access,
+add these lines to your ~/.gitconfig
:
+
+[url "ssh://git@github.com/"] + insteadOf = https://github.com/ ++ +
+Since the inception of the project, Go has had no explicit concept of package versions, +but that is changing. +Versioning is a source of significant complexity, especially in large code bases, +and it has taken some time to develop an +approach that works well at scale in a large enough +variety of situations to be appropriate to supply to all Go users. +
+ +
+The Go 1.11 release adds new, experimental support
+for package versioning to the go
command,
+in the form of Go modules.
+For more information, see the Go 1.11 release notes
+and the go
command documentation.
+
+Regardless of the actual package management technology,
+"go get" and the larger Go toolchain does provide isolation of
+packages with different import paths.
+For example, the standard library's html/template
and text/template
+coexist even though both are "package template".
+This observation leads to some advice for package authors and package users.
+
+Packages intended for public use should try to maintain backwards compatibility as they evolve. +The Go 1 compatibility guidelines are a good reference here: +don't remove exported names, encourage tagged composite literals, and so on. +If different functionality is required, add a new name instead of changing an old one. +If a complete break is required, create a new package with a new import path. +
+ +
+If you're using an externally supplied package and worry that it might change in
+unexpected ways, but are not yet using Go modules,
+the simplest solution is to copy it to your local repository.
+This is the approach Google takes internally and is supported by the
+go
command through a technique called "vendoring".
+This involves
+storing a copy of the dependency under a new import path that identifies it as a local copy.
+See the design
+document for details.
+
+As in all languages in the C family, everything in Go is passed by value.
+That is, a function always gets a copy of the
+thing being passed, as if there were an assignment statement assigning the
+value to the parameter. For instance, passing an int
value
+to a function makes a copy of the int
, and passing a pointer
+value makes a copy of the pointer, but not the data it points to.
+(See a later
+section for a discussion of how this affects method receivers.)
+
+Map and slice values behave like pointers: they are descriptors that +contain pointers to the underlying map or slice data. Copying a map or +slice value doesn't copy the data it points to. Copying an interface value +makes a copy of the thing stored in the interface value. If the interface +value holds a struct, copying the interface value makes a copy of the +struct. If the interface value holds a pointer, copying the interface value +makes a copy of the pointer, but again not the data it points to. +
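+A brief sketch of the difference between copying a slice, which copies only
+the descriptor, and copying an array, which copies the data itself:
+
+package main
+
+import "fmt"
+
+func main() {
+	s := []int{1, 2, 3}
+	t := s            // copies the slice header, not the elements
+	t[0] = 99
+	fmt.Println(s[0]) // 99: the change is visible through s as well
+
+	a := [3]int{1, 2, 3}
+	b := a            // arrays are values; the elements are copied
+	b[0] = 99
+	fmt.Println(a[0]) // 1: a is unaffected
+}
+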
+ ++Note that this discussion is about the semantics of the operations. +Actual implementations may apply optimizations to avoid copying +as long as the optimizations do not change the semantics. +
+ ++Almost never. Pointers to interface values arise only in rare, tricky situations involving +disguising an interface value's type for delayed evaluation. +
+ ++It is a common mistake to pass a pointer to an interface value +to a function expecting an interface. The compiler will complain about this +error but the situation can still be confusing, because sometimes a +pointer +is necessary to satisfy an interface. +The insight is that although a pointer to a concrete type can satisfy +an interface, with one exception a pointer to an interface can never satisfy an interface. +
+ ++Consider the variable declaration, +
+ ++var w io.Writer ++ +
+The printing function fmt.Fprintf
takes as its first argument
+a value that satisfies io.Writer
—something that implements
+the canonical Write
method. Thus we can write
+
+fmt.Fprintf(w, "hello, world\n") ++ +
+If however we pass the address of w
, the program will not compile.
+
+fmt.Fprintf(&w, "hello, world\n") // Compile-time error. ++ +
+The one exception is that any value, even a pointer to an interface, can be assigned to
+a variable of empty interface type (interface{}
).
+Even so, it's almost certainly a mistake if the value is a pointer to an interface;
+the result can be confusing.
+
+func (s *MyStruct) pointerMethod() { } // method on pointer +func (s MyStruct) valueMethod() { } // method on value ++ +
+For programmers unaccustomed to pointers, the distinction between these
+two examples can be confusing, but the situation is actually very simple.
+When defining a method on a type, the receiver (s
in the above
+examples) behaves exactly as if it were an argument to the method.
+Whether to define the receiver as a value or as a pointer is the same
+question, then, as whether a function argument should be a value or
+a pointer.
+There are several considerations.
+
+First, and most important, does the method need to modify the
+receiver?
+If it does, the receiver must be a pointer.
+(Slices and maps act as references, so their story is a little
+more subtle, but for instance to change the length of a slice
+in a method the receiver must still be a pointer.)
+In the examples above, if pointerMethod
modifies
+the fields of s
,
+the caller will see those changes, but valueMethod
+is called with a copy of the caller's argument (that's the definition
+of passing a value), so changes it makes will be invisible to the caller.
+
+By the way, in Java method receivers are always pointers, +although their pointer nature is somewhat disguised +(and there is a proposal to add value receivers to the language). +It is the value receivers in Go that are unusual. +
+ +
+Second is the consideration of efficiency. If the receiver is large,
+a big struct
for instance, it will be much cheaper to
+use a pointer receiver.
+
+Next is consistency. If some of the methods of the type must have +pointer receivers, the rest should too, so the method set is +consistent regardless of how the type is used. +See the section on method sets +for details. +
+ +
+For types such as basic types, slices, and small structs
,
+a value receiver is very cheap so unless the semantics of the method
+requires a pointer, a value receiver is efficient and clear.
+
+In short: new
allocates memory, while make
initializes
+the slice, map, and channel types.
+
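+A minimal sketch of the distinction:
+
+package main
+
+import "fmt"
+
+func main() {
+	p := new([]int)         // p has type *[]int; *p is a nil slice
+	v := make([]int, 0, 10) // v has type []int, ready to append to
+
+	*p = append(*p, 1) // legal, but the extra indirection is rarely wanted
+	v = append(v, 1)
+
+	fmt.Println(len(*p), len(v), cap(v)) // 1 1 10
+}
+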
+See the relevant section +of Effective Go for more details. +
+ +int
 on a 64-bit machine?
+The sizes of int
and uint
are implementation-specific
+but the same as each other on a given platform.
+For portability, code that relies on a particular
+size of value should use an explicitly sized type, like int64
.
+On 32-bit machines the compilers use 32-bit integers by default,
+while on 64-bit machines integers have 64 bits.
+(Historically, this was not always true.)
+
+On the other hand, floating-point scalars and complex
+types are always sized (there are no float
or complex
basic types),
+because programmers should be aware of precision when using floating-point numbers.
+The default type used for an (untyped) floating-point constant is float64
.
+Thus foo
:=
3.0
declares a variable foo
+of type float64
.
+For a float32
variable initialized by an (untyped) constant, the variable type
+must be specified explicitly in the variable declaration:
+
+var foo float32 = 3.0 ++ +
+Alternatively, the constant must be given a type with a conversion as in
+foo := float32(3.0)
.
+
+From a correctness standpoint, you don't need to know. +Each variable in Go exists as long as there are references to it. +The storage location chosen by the implementation is irrelevant to the +semantics of the language. +
+ ++The storage location does have an effect on writing efficient programs. +When possible, the Go compilers will allocate variables that are +local to a function in that function's stack frame. However, if the +compiler cannot prove that the variable is not referenced after the +function returns, then the compiler must allocate the variable on the +garbage-collected heap to avoid dangling pointer errors. +Also, if a local variable is very large, it might make more sense +to store it on the heap rather than the stack. +
+ ++In the current compilers, if a variable has its address taken, that variable +is a candidate for allocation on the heap. However, a basic escape +analysis recognizes some cases when such variables will not +live past the return from the function and can reside on the stack. +
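+As an illustration, in the hypothetical functions below the compiler must move
+x to the heap because its address outlives the call, while y can stay in the
+stack frame; building with go build -gcflags=-m prints the compiler's
+escape-analysis decisions.
+
+package main
+
+import "fmt"
+
+// newCounter returns the address of a local variable, so that variable
+// must be heap-allocated: it outlives the function call.
+func newCounter() *int {
+	x := 0
+	return &x
+}
+
+// sum's local y never escapes, so it can live on the stack.
+func sum(a, b int) int {
+	y := a + b
+	return y
+}
+
+func main() {
+	c := newCounter()
+	*c++
+	fmt.Println(*c, sum(1, 2))
+}
+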
+ ++The Go memory allocator reserves a large region of virtual memory as an arena +for allocations. This virtual memory is local to the specific Go process; the +reservation does not deprive other processes of memory. +
+ +
+To find the amount of actual memory allocated to a Go process, use the Unix
+top
command and consult the RES
(Linux) or
+RSIZE
(macOS) columns.
+
+
+A description of the atomicity of operations in Go can be found in +the Go Memory Model document. +
+ ++Low-level synchronization and atomic primitives are available in the +sync and +sync/atomic +packages. +These packages are good for simple tasks such as incrementing +reference counts or guaranteeing small-scale mutual exclusion. +
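+For instance, a shared counter can be maintained with sync/atomic (a minimal
+sketch):
+
+package main
+
+import (
+	"fmt"
+	"sync"
+	"sync/atomic"
+)
+
+func main() {
+	var refs int64
+	var wg sync.WaitGroup
+	for i := 0; i < 10; i++ {
+		wg.Add(1)
+		go func() {
+			defer wg.Done()
+			atomic.AddInt64(&refs, 1) // safe concurrent increment
+		}()
+	}
+	wg.Wait()
+	fmt.Println(atomic.LoadInt64(&refs)) // 10
+}
+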
+ ++For higher-level operations, such as coordination among +concurrent servers, higher-level techniques can lead +to nicer programs, and Go supports this approach through +its goroutines and channels. +For instance, you can structure your program so that only one +goroutine at a time is ever responsible for a particular piece of data. +That approach is summarized by the original +Go proverb, +
+ ++Do not communicate by sharing memory. Instead, share memory by communicating. +
+ ++See the Share Memory By Communicating code walk +and its +associated article for a detailed discussion of this concept. +
+ ++Large concurrent programs are likely to borrow from both these toolkits. +
+ ++Whether a program runs faster with more CPUs depends on the problem +it is solving. +The Go language provides concurrency primitives, such as goroutines +and channels, but concurrency only enables parallelism +when the underlying problem is intrinsically parallel. +Problems that are intrinsically sequential cannot be sped up by adding +more CPUs, while those that can be broken into pieces that can +execute in parallel can be sped up, sometimes dramatically. +
+ ++Sometimes adding more CPUs can slow a program down. +In practical terms, programs that spend more time +synchronizing or communicating than doing useful computation +may experience performance degradation when using +multiple OS threads. +This is because passing data between threads involves switching +contexts, which has significant cost, and that cost can increase +with more CPUs. +For instance, the prime sieve example +from the Go specification has no significant parallelism although it launches many +goroutines; increasing the number of threads (CPUs) is more likely to slow it down than +to speed it up. +
+ ++For more detail on this topic see the talk entitled +Concurrency +is not Parallelism. + +
+The number of CPUs available simultaneously to executing goroutines is
+controlled by the GOMAXPROCS
shell environment variable,
+whose default value is the number of CPU cores available.
+Programs with the potential for parallel execution should therefore
+achieve it by default on a multiple-CPU machine.
+To change the number of parallel CPUs to use,
+set the environment variable or use the similarly-named
+function
+of the runtime package to configure the
+run-time support to utilize a different number of threads.
+Setting it to 1 eliminates the possibility of true parallelism,
+forcing independent goroutines to take turns executing.
+
+The runtime can allocate more threads than the value
+of GOMAXPROCS
to service multiple outstanding
+I/O requests.
+GOMAXPROCS
only affects how many goroutines
+can actually execute at once; arbitrarily more may be blocked
+in system calls.
+
+Go's goroutine scheduler is not as good as it needs to be, although it
+has improved over time.
+In the future, it may better optimize its use of OS threads.
+For now, if there are performance issues,
+setting GOMAXPROCS
on a per-application basis may help.
+
+Goroutines do not have names; they are just anonymous workers.
+They expose no unique identifier, name, or data structure to the programmer.
+Some people are surprised by this, expecting the go
+statement to return some item that can be used to access and control
+the goroutine later.
+
+The fundamental reason goroutines are anonymous is so that +the full Go language is available when programming concurrent code. +By contrast, the usage patterns that develop when threads and goroutines are +named can restrict what a library using them can do. +
+ +
+Here is an illustration of the difficulties.
+Once one names a goroutine and constructs a model around
+it, it becomes special, and one is tempted to associate all computation
+with that goroutine, ignoring the possibility
+of using multiple, possibly shared goroutines for the processing.
+If the net/http
package associated per-request
+state with a goroutine,
+clients would be unable to use more goroutines
+when serving a request.
+
+Moreover, experience with libraries such as those for graphics systems +that require all processing to occur on the "main thread" +has shown how awkward and limiting the approach can be when +deployed in a concurrent language. +The very existence of a special thread or goroutine forces +the programmer to distort the program to avoid crashes +and other problems caused by inadvertently operating +on the wrong thread. +
+ ++For those cases where a particular goroutine is truly special, +the language provides features such as channels that can be +used in flexible ways to interact with it. +
+ +
+As the Go specification says,
+the method set of a type T
consists of all methods
+with receiver type T
,
+while that of the corresponding pointer
+type *T
consists of all methods with receiver *T
or
+T
.
+That means the method set of *T
+includes that of T
,
+but not the reverse.
+
+This distinction arises because
+if an interface value contains a pointer *T
,
+a method call can obtain a value by dereferencing the pointer,
+but if an interface value contains a value T
,
+there is no safe way for a method call to obtain a pointer.
+(Doing so would allow a method to modify the contents of
+the value inside the interface, which is not permitted by
+the language specification.)
+
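+A small sketch of the rule (the Counter type is illustrative): a method with a
+pointer receiver is in the method set of *Counter but not of Counter, so only
+the pointer satisfies the interface.
+
+package main
+
+import "fmt"
+
+type Counter struct{ n int }
+
+// Inc has a pointer receiver, so it is in the method set of *Counter only.
+func (c *Counter) Inc() { c.n++ }
+
+type Incrementer interface{ Inc() }
+
+func main() {
+	c := Counter{}
+	// var i Incrementer = c // compile error: Inc has a pointer receiver
+	var i Incrementer = &c // fine: *Counter's method set includes Inc
+	i.Inc()
+	fmt.Println(c.n) // 1
+}
+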
+Even in cases where the compiler could take the address of a value
+to pass to the method, if the method modifies the value the changes
+will be lost in the caller.
+As an example, if the Write
method of
+bytes.Buffer
+used a value receiver rather than a pointer,
+this code:
+
+var buf bytes.Buffer +io.Copy(buf, os.Stdin) ++ +
+would copy standard input into a copy of buf
,
+not into buf
itself.
+This is almost never the desired behavior.
+
+Some confusion may arise when using closures with concurrency. +Consider the following program: +
+ ++func main() { + done := make(chan bool) + + values := []string{"a", "b", "c"} + for _, v := range values { + go func() { + fmt.Println(v) + done <- true + }() + } + + // wait for all goroutines to complete before exiting + for _ = range values { + <-done + } +} ++ +
+One might mistakenly expect to see a, b, c
as the output.
+What you'll probably see instead is c, c, c
. This is because
+each iteration of the loop uses the same instance of the variable v
, so
+each closure shares that single variable. When the closure runs, it prints the
+value of v
at the time fmt.Println
is executed,
+but v
may have been modified since the goroutine was launched.
+To help detect this and other problems before they happen, run
+go vet
.
+
+To bind the current value of v
to each closure as it is launched, one
+must modify the inner loop to create a new variable each iteration.
+One way is to pass the variable as an argument to the closure:
+
+ for _, v := range values { + go func(u string) { + fmt.Println(u) + done <- true + }(v) + } ++ +
+In this example, the value of v
is passed as an argument to the
+anonymous function. That value is then accessible inside the function as
+the variable u
.
+
+Even easier is just to create a new variable, using a declaration style that may +seem odd but works fine in Go: +
+ ++ for _, v := range values { + v := v // create a new 'v'. + go func() { + fmt.Println(v) + done <- true + }() + } ++ +
+This behavior of the language, not defining a new variable for +each iteration, may have been a mistake in retrospect. +It may be addressed in a later version but, for compatibility, +cannot change in Go version 1. +
+ +?:
operator?+There is no ternary testing operation in Go. +You may use the following to achieve the same +result: +
+ ++if expr { + n = trueVal +} else { + n = falseVal +} ++ +
+The reason ?:
is absent from Go is that the language's designers
+had seen the operation used too often to create impenetrably complex expressions.
+The if-else
form, although longer,
+is unquestionably clearer.
+A language needs only one conditional control flow construct.
+
+Put all the source files for the package in a directory by themselves. +Source files can refer to items from different files at will; there is +no need for forward declarations or a header file. +
+ ++Other than being split into multiple files, the package will compile and test +just like a single-file package. +
+ +
+Create a new file ending in _test.go
in the same directory
+as your package sources. Inside that file, import "testing"
+and write functions of the form
+
+func TestFoo(t *testing.T) { + ... +} ++ +
+Run go test
in that directory.
+That command finds the Test
functions,
+builds a test binary, and runs it.
+
See the How to Write Go Code document,
+the testing
package
+and the go test
subcommand for more details.
+
+Go's standard testing
package makes it easy to write unit tests, but it lacks
+features provided in other languages' testing frameworks such as assertion functions.
+An earlier section of this document explained why Go
+doesn't have assertions, and
+the same arguments apply to the use of assert
in tests.
+Proper error handling means letting other tests run after one has failed, so
+that the person debugging the failure gets a complete picture of what is
+wrong. It is more useful for a test to report that
+isPrime
gives the wrong answer for 2, 3, 5, and 7 (or for
+2, 4, 8, and 16) than to report that isPrime
gives the wrong
+answer for 2 and therefore no more tests were run. The programmer who
+triggers the test failure may not be familiar with the code that fails.
+Time invested writing a good error message now pays off later when the
+test breaks.
+
+A related point is that testing frameworks tend to develop into mini-languages +of their own, with conditionals and controls and printing mechanisms, +but Go already has all those capabilities; why recreate them? +We'd rather write tests in Go; it's one fewer language to learn and the +approach keeps the tests straightforward and easy to understand. +
+ +
+If the amount of extra code required to write
+good errors seems repetitive and overwhelming, the test might work better if
+table-driven, iterating over a list of inputs and outputs defined
+in a data structure (Go has excellent support for data structure literals).
+The work to write a good test and good error messages will then be amortized over many
+test cases. The standard Go library is full of illustrative examples, such as in
+the formatting tests for the fmt
package.
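+A minimal table-driven test might look like the sketch below; the isPrime
+function and its test table are hypothetical, not taken from the standard
+library.
+
+package prime
+
+import "testing"
+
+func isPrime(n int) bool {
+	if n < 2 {
+		return false
+	}
+	for d := 2; d*d <= n; d++ {
+		if n%d == 0 {
+			return false
+		}
+	}
+	return true
+}
+
+func TestIsPrime(t *testing.T) {
+	tests := []struct {
+		n    int
+		want bool
+	}{
+		{2, true}, {3, true}, {4, false}, {5, true}, {9, false},
+	}
+	for _, tt := range tests {
+		if got := isPrime(tt.n); got != tt.want {
+			// Report the failure and keep going so the whole table is checked.
+			t.Errorf("isPrime(%d) = %v, want %v", tt.n, got, tt.want)
+		}
+	}
+}
+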
+
+The standard library's purpose is to support the runtime, connect to +the operating system, and provide key functionality that many Go +programs require, such as formatted I/O and networking. +It also contains elements important for web programming, including +cryptography and support for standards like HTTP, JSON, and XML. +
+ ++There is no clear criterion that defines what is included because for +a long time, this was the only Go library. +There are criteria that define what gets added today, however. +
+ ++New additions to the standard library are rare and the bar for +inclusion is high. +Code included in the standard library bears a large ongoing maintenance cost +(often borne by those other than the original author), +is subject to the Go 1 compatibility promise +(blocking fixes to any flaws in the API), +and is subject to the Go +release schedule, +preventing bug fixes from being available to users quickly. +
+ +
+Most new code should live outside of the standard library and be accessible
+via the go
tool's
+go get
command.
+Such code can have its own maintainers, release cycle,
+and compatibility guarantees.
+Users can find packages and read their documentation at
+godoc.org.
+
+Although there are pieces in the standard library that don't really belong,
+such as log/syslog
, we continue to maintain everything in the
+library because of the Go 1 compatibility promise.
+But we encourage most new code to live elsewhere.
+
+There are several production compilers for Go, and a number of others +in development for various platforms. +
+ +
+The default compiler, gc
, is included with the
+Go distribution as part of the support for the go
+command.
+Gc
was originally written in C
+because of the difficulties of bootstrapping—you'd need a Go compiler to
+set up a Go environment.
+But things have advanced and since the Go 1.5 release the compiler has been
+a Go program.
+The compiler was converted from C to Go using automatic translation tools, as
+described in this design document
+and talk.
+Thus the compiler is now "self-hosting", which means we needed to face
+the bootstrapping problem.
+The solution is to have a working Go installation already in place,
+just as one normally has with a working C installation.
+The story of how to bring up a new Go environment from source
+is described here and
+here.
+
+Gc
is written in Go with a recursive descent parser
+and uses a custom loader, also written in Go but
+based on the Plan 9 loader, to generate ELF/Mach-O/PE binaries.
+
+At the beginning of the project we considered using LLVM for
+gc
but decided it was too large and slow to meet
+our performance goals.
+More important in retrospect, starting with LLVM would have made it
+harder to introduce some of the ABI and related changes, such as
+stack management, that Go requires but are not part of the standard
+C setup.
+A new LLVM implementation
+is starting to come together now, however.
+
+The Gccgo
compiler is a front end written in C++
+with a recursive descent parser coupled to the
+standard GCC back end.
+
+Go turned out to be a fine language in which to implement a Go compiler, +although that was not its original goal. +Not being self-hosting from the beginning allowed Go's design to +concentrate on its original use case, which was networked servers. +Had we decided Go should compile itself early on, we might have +ended up with a language targeted more for compiler construction, +which is a worthy goal but not the one we had initially. +
+ +
+Although gc
does not use them (yet?), a native lexer and
+parser are available in the go
package
+and there is also a native type checker.
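+For example, a tool can parse Go source with the go/parser, go/token, and
+go/ast packages (a minimal sketch):
+
+package main
+
+import (
+	"fmt"
+	"go/ast"
+	"go/parser"
+	"go/token"
+	"log"
+)
+
+func main() {
+	src := "package main\nfunc add(a, b int) int { return a + b }\n"
+	fset := token.NewFileSet()
+	f, err := parser.ParseFile(fset, "example.go", src, 0)
+	if err != nil {
+		log.Fatal(err)
+	}
+	// Walk the top-level declarations and print the function names.
+	for _, decl := range f.Decls {
+		if fn, ok := decl.(*ast.FuncDecl); ok {
+			fmt.Println(fn.Name.Name) // prints "add"
+		}
+	}
+}
+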
+
+Again due to bootstrapping issues, the run-time code was originally written mostly in C (with a
+tiny bit of assembler) but it has since been translated to Go
+(except for some assembler bits).
+Gccgo
's run-time support uses glibc
.
+The gccgo
compiler implements goroutines using
+a technique called segmented stacks,
+supported by recent modifications to the gold linker.
+Gollvm
similarly is built on the corresponding
+LLVM infrastructure.
+
+The linker in the gc
toolchain
+creates statically-linked binaries by default.
+All Go binaries therefore include the Go
+runtime, along with the run-time type information necessary to support dynamic
+type checks, reflection, and even panic-time stack traces.
+
+A simple C "hello, world" program compiled and linked statically using
+gcc on Linux is around 750 kB, including an implementation of
+printf
.
+An equivalent Go program using
+fmt.Printf
weighs a couple of megabytes, but that includes
+more powerful run-time support and type and debugging information.
+
+A Go program compiled with gc
can be linked with
+the -ldflags=-w
flag to disable DWARF generation,
+removing debugging information from the binary but with no
+other loss of functionality.
+This can reduce the binary size substantially.
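+An illustrative invocation:
+
+$ go build -ldflags=-w -o hello hello.go
+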
+
+The presence of an unused variable may indicate a bug, while +unused imports just slow down compilation, +an effect that can become substantial as a program accumulates +code and programmers over time. +For these reasons, Go refuses to compile programs with unused +variables or imports, +trading short-term convenience for long-term build speed and +program clarity. +
+ ++Still, when developing code, it's common to create these situations +temporarily and it can be annoying to have to edit them out before the +program will compile. +
+ ++Some have asked for a compiler option to turn those checks off +or at least reduce them to warnings. +Such an option has not been added, though, +because compiler options should not affect the semantics of the +language and because the Go compiler does not report warnings, only +errors that prevent compilation. +
+ ++There are two reasons for having no warnings. First, if it's worth +complaining about, it's worth fixing in the code. (And if it's not +worth fixing, it's not worth mentioning.) Second, having the compiler +generate warnings encourages the implementation to warn about weak +cases that can make compilation noisy, masking real errors that +should be fixed. +
+ ++It's easy to address the situation, though. Use the blank identifier +to let unused things persist while you're developing. +
+ ++import "unused" + +// This declaration marks the import as used by referencing an +// item from the package. +var _ = unused.Item // TODO: Delete before committing! + +func main() { + debugData := debug.Profile() + _ = debugData // Used only during debugging. + .... +} ++ +
+Nowadays, most Go programmers use a tool, +goimports, +which automatically rewrites a Go source file to have the correct imports, +eliminating the unused imports issue in practice. +This program is easily connected to most editors to run automatically when a Go source file is written. +
+ ++This is a common occurrence, especially on Windows machines, and is almost always a false positive. +Commercial virus scanning programs are often confused by the structure of Go binaries, which +they don't see as often as those compiled from other languages. +
+ ++If you've just installed the Go distribution and the system reports it is infected, that's certainly a mistake. +To be really thorough, you can verify the download by comparing the checksum with those on the +downloads page. +
+ ++In any case, if you believe the report is in error, please report a bug to the supplier of your virus scanner. +Maybe in time virus scanners can learn to understand Go programs. +
+ ++One of Go's design goals is to approach the performance of C for comparable +programs, yet on some benchmarks it does quite poorly, including several +in golang.org/x/exp/shootout. +The slowest depend on libraries for which versions of comparable performance +are not available in Go. +For instance, pidigits.go +depends on a multi-precision math package, and the C +versions, unlike Go's, use GMP (which is +written in optimized assembler). +Benchmarks that depend on regular expressions +(regex-dna.go, +for instance) are essentially comparing Go's native regexp package to +mature, highly optimized regular expression libraries like PCRE. +
+ ++Benchmark games are won by extensive tuning and the Go versions of most +of the benchmarks need attention. If you measure comparable C +and Go programs +(reverse-complement.go +is one example), you'll see the two languages are much closer in raw performance +than this suite would indicate. +
+ ++Still, there is room for improvement. The compilers are good but could be +better, many libraries need major performance work, and the garbage collector +isn't fast enough yet. (Even if it were, taking care not to generate unnecessary +garbage can have a huge effect.) +
+ ++In any case, Go can often be very competitive. +There has been significant improvement in the performance of many programs +as the language and tools have developed. +See the blog post about +profiling +Go programs for an informative example. + +
+Other than declaration syntax, the differences are not major and stem +from two desires. First, the syntax should feel light, without too +many mandatory keywords, repetition, or arcana. Second, the language +has been designed to be easy to analyze +and can be parsed without a symbol table. This makes it much easier +to build tools such as debuggers, dependency analyzers, automated +documentation extractors, IDE plug-ins, and so on. C and its +descendants are notoriously difficult in this regard. +
+ +
+They're only backwards if you're used to C. In C, the notion is that a
+variable is declared like an expression denoting its type, which is a
+nice idea, but the type and expression grammars don't mix very well and
+the results can be confusing; consider function pointers. Go mostly
+separates expression and type syntax and that simplifies things (using
+prefix *
for pointers is an exception that proves the rule). In C,
+the declaration
+
+ int* a, b; ++
+declares a
to be a pointer but not b
; in Go
+
+ var a, b *int ++
+declares both to be pointers. This is clearer and more regular.
+Also, the :=
short declaration form argues that a full variable
+declaration should present the same order as :=
so
+
+ var a uint64 = 1 ++
+has the same effect as +
++ a := uint64(1) ++
+Parsing is also simplified by having a distinct grammar for types that
+is not just the expression grammar; keywords such as func
+and chan
keep things clear.
+
+See the article about +Go's Declaration Syntax +for more details. +
+ ++Safety. Without pointer arithmetic it's possible to create a +language that can never derive an illegal address that succeeds +incorrectly. Compiler and hardware technology have advanced to the +point where a loop using array indices can be as efficient as a loop +using pointer arithmetic. Also, the lack of pointer arithmetic can +simplify the implementation of the garbage collector. +
+ +++
and --
statements and not expressions? And why postfix, not prefix?
+Without pointer arithmetic, the convenience value of pre- and postfix
+increment operators drops. By removing them from the expression
+hierarchy altogether, expression syntax is simplified and the messy
+issues around order of evaluation of ++
and --
+(consider f(i++)
and p[i] = q[++i]
)
+are eliminated as well. The simplification is
+significant. As for postfix vs. prefix, either would work fine but
+the postfix version is more traditional; insistence on prefix arose
+with the STL, a library for a language whose name contains, ironically, a
+postfix increment.
+
+Go uses brace brackets for statement grouping, a syntax familiar to +programmers who have worked with any language in the C family. +Semicolons, however, are for parsers, not for people, and we wanted to +eliminate them as much as possible. To achieve this goal, Go borrows +a trick from BCPL: the semicolons that separate statements are in the +formal grammar but are injected automatically, without lookahead, by +the lexer at the end of any line that could be the end of a statement. +This works very well in practice but has the effect that it forces a +brace style. For instance, the opening brace of a function cannot +appear on a line by itself. +
+ +
+Some have argued that the lexer should do lookahead to permit the
+brace to live on the next line. We disagree. Since Go code is meant
+to be formatted automatically by
+gofmt
,
+some style must be chosen. That style may differ from what
+you've used in C or Java, but Go is a different language and
+gofmt
's style is as good as any other. More
+important—much more important—the advantages of a single,
+programmatically mandated format for all Go programs greatly outweigh
+any perceived disadvantages of the particular style.
+Note too that Go's style means that an interactive implementation of
+Go can use the standard syntax one line at a time without special rules.
+
+
+One of the biggest sources of bookkeeping in systems programs is
+managing the lifetimes of allocated objects.
+In languages such as C in which it is done manually,
+it can consume a significant amount of programmer time and is
+often the cause of pernicious bugs.
+Even in languages like C++ or Rust that provide mechanisms
+to assist, those mechanisms can have a significant effect on the
+design of the software, often adding programming overhead
+of their own.
+We felt it was critical to eliminate such
+programmer overheads, and advances in garbage collection
+technology in the last few years gave us confidence that it
+could be implemented cheaply enough, and with low enough
+latency, that it could be a viable approach for networked
+systems.
+
+ ++Much of the difficulty of concurrent programming +has its roots in the object lifetime problem: +as objects get passed among threads it becomes cumbersome +to guarantee they become freed safely. +Automatic garbage collection makes concurrent code far easier to write. +Of course, implementing garbage collection in a concurrent environment is +itself a challenge, but meeting it once rather than in every +program helps everyone. +
+ ++Finally, concurrency aside, garbage collection makes interfaces +simpler because they don't need to specify how memory is managed across them. +
+ ++This is not to say that the recent work in languages +like Rust that bring new ideas to the problem of managing +resources is misguided; we encourage this work and are excited to see +how it evolves. +But Go takes a more traditional approach by addressing +object lifetimes through +garbage collection, and garbage collection alone. +
+ ++The current implementation is a mark-and-sweep collector. +If the machine is a multiprocessor, the collector runs on a separate CPU +core in parallel with the main program. +Major work on the collector in recent years has reduced pause times +often to the sub-millisecond range, even for large heaps, +all but eliminating one of the major objections to garbage collection +in networked servers. +Work continues to refine the algorithm, reduce overhead and +latency further, and to explore new approaches. +The 2018 +ISMM keynote +by Rick Hudson of the Go team +describes the progress so far and suggests some future approaches. +
+ ++On the topic of performance, keep in mind that Go gives the programmer +considerable control over memory layout and allocation, much more than +is typical in garbage-collected languages. A careful programmer can reduce +the garbage collection overhead dramatically by using the language well; +see the article about +profiling +Go programs for a worked example, including a demonstration of Go's +profiling tools. +
diff --git a/_content/doc/gopher/README b/_content/doc/gopher/README new file mode 100644 index 00000000..d4ca8a1c --- /dev/null +++ b/_content/doc/gopher/README @@ -0,0 +1,3 @@ +The Go gopher was designed by Renee French. (http://reneefrench.blogspot.com/) +The design is licensed under the Creative Commons 3.0 Attributions license. +Read this article for more details: https://blog.golang.org/gopher diff --git a/_content/doc/gopher/appenginegopher.jpg b/_content/doc/gopher/appenginegopher.jpg new file mode 100644 index 00000000..0a643066 Binary files /dev/null and b/_content/doc/gopher/appenginegopher.jpg differ diff --git a/_content/doc/gopher/appenginegophercolor.jpg b/_content/doc/gopher/appenginegophercolor.jpg new file mode 100644 index 00000000..68795a99 Binary files /dev/null and b/_content/doc/gopher/appenginegophercolor.jpg differ diff --git a/_content/doc/gopher/appenginelogo.gif b/_content/doc/gopher/appenginelogo.gif new file mode 100644 index 00000000..46b3c1ee Binary files /dev/null and b/_content/doc/gopher/appenginelogo.gif differ diff --git a/_content/doc/gopher/biplane.jpg b/_content/doc/gopher/biplane.jpg new file mode 100644 index 00000000..d5e666f9 Binary files /dev/null and b/_content/doc/gopher/biplane.jpg differ diff --git a/_content/doc/gopher/bumper.png b/_content/doc/gopher/bumper.png new file mode 100644 index 00000000..b357cdf4 Binary files /dev/null and b/_content/doc/gopher/bumper.png differ diff --git a/_content/doc/gopher/bumper192x108.png b/_content/doc/gopher/bumper192x108.png new file mode 100644 index 00000000..925474e7 Binary files /dev/null and b/_content/doc/gopher/bumper192x108.png differ diff --git a/_content/doc/gopher/bumper320x180.png b/_content/doc/gopher/bumper320x180.png new file mode 100644 index 00000000..611c417c Binary files /dev/null and b/_content/doc/gopher/bumper320x180.png differ diff --git a/_content/doc/gopher/bumper480x270.png b/_content/doc/gopher/bumper480x270.png new file mode 100644 index 00000000..cf187151 Binary files /dev/null and b/_content/doc/gopher/bumper480x270.png differ diff --git a/_content/doc/gopher/bumper640x360.png b/_content/doc/gopher/bumper640x360.png new file mode 100644 index 00000000..a5073e0d Binary files /dev/null and b/_content/doc/gopher/bumper640x360.png differ diff --git a/_content/doc/gopher/doc.png b/_content/doc/gopher/doc.png new file mode 100644 index 00000000..e15a3234 Binary files /dev/null and b/_content/doc/gopher/doc.png differ diff --git a/_content/doc/gopher/favicon.svg b/_content/doc/gopher/favicon.svg new file mode 100644 index 00000000..e5a68fe2 --- /dev/null +++ b/_content/doc/gopher/favicon.svg @@ -0,0 +1,238 @@ + + + + diff --git a/_content/doc/gopher/fiveyears.jpg b/_content/doc/gopher/fiveyears.jpg new file mode 100644 index 00000000..df106486 Binary files /dev/null and b/_content/doc/gopher/fiveyears.jpg differ diff --git a/_content/doc/gopher/frontpage.png b/_content/doc/gopher/frontpage.png new file mode 100644 index 00000000..1eb81f0b Binary files /dev/null and b/_content/doc/gopher/frontpage.png differ diff --git a/_content/doc/gopher/gopherbw.png b/_content/doc/gopher/gopherbw.png new file mode 100644 index 00000000..3bfe85dc Binary files /dev/null and b/_content/doc/gopher/gopherbw.png differ diff --git a/_content/doc/gopher/gophercolor.png b/_content/doc/gopher/gophercolor.png new file mode 100644 index 00000000..b5f8d01f Binary files /dev/null and b/_content/doc/gopher/gophercolor.png differ diff --git a/_content/doc/gopher/gophercolor16x16.png 
b/_content/doc/gopher/gophercolor16x16.png new file mode 100644 index 00000000..ec7028cc Binary files /dev/null and b/_content/doc/gopher/gophercolor16x16.png differ diff --git a/_content/doc/gopher/help.png b/_content/doc/gopher/help.png new file mode 100644 index 00000000..6ee52389 Binary files /dev/null and b/_content/doc/gopher/help.png differ diff --git a/_content/doc/gopher/modelsheet.jpg b/_content/doc/gopher/modelsheet.jpg new file mode 100644 index 00000000..c31e35a6 Binary files /dev/null and b/_content/doc/gopher/modelsheet.jpg differ diff --git a/_content/doc/gopher/pencil/gopherhat.jpg b/_content/doc/gopher/pencil/gopherhat.jpg new file mode 100644 index 00000000..f34d7b32 Binary files /dev/null and b/_content/doc/gopher/pencil/gopherhat.jpg differ diff --git a/_content/doc/gopher/pencil/gopherhelmet.jpg b/_content/doc/gopher/pencil/gopherhelmet.jpg new file mode 100644 index 00000000..c7b6c61b Binary files /dev/null and b/_content/doc/gopher/pencil/gopherhelmet.jpg differ diff --git a/_content/doc/gopher/pencil/gophermega.jpg b/_content/doc/gopher/pencil/gophermega.jpg new file mode 100644 index 00000000..779fb073 Binary files /dev/null and b/_content/doc/gopher/pencil/gophermega.jpg differ diff --git a/_content/doc/gopher/pencil/gopherrunning.jpg b/_content/doc/gopher/pencil/gopherrunning.jpg new file mode 100644 index 00000000..eeeddf10 Binary files /dev/null and b/_content/doc/gopher/pencil/gopherrunning.jpg differ diff --git a/_content/doc/gopher/pencil/gopherswim.jpg b/_content/doc/gopher/pencil/gopherswim.jpg new file mode 100644 index 00000000..2f328771 Binary files /dev/null and b/_content/doc/gopher/pencil/gopherswim.jpg differ diff --git a/_content/doc/gopher/pencil/gopherswrench.jpg b/_content/doc/gopher/pencil/gopherswrench.jpg new file mode 100644 index 00000000..93005f42 Binary files /dev/null and b/_content/doc/gopher/pencil/gopherswrench.jpg differ diff --git a/_content/doc/gopher/pkg.png b/_content/doc/gopher/pkg.png new file mode 100644 index 00000000..ac96551b Binary files /dev/null and b/_content/doc/gopher/pkg.png differ diff --git a/_content/doc/gopher/project.png b/_content/doc/gopher/project.png new file mode 100644 index 00000000..24603f30 Binary files /dev/null and b/_content/doc/gopher/project.png differ diff --git a/_content/doc/gopher/ref.png b/_content/doc/gopher/ref.png new file mode 100644 index 00000000..0508f6ec Binary files /dev/null and b/_content/doc/gopher/ref.png differ diff --git a/_content/doc/gopher/run.png b/_content/doc/gopher/run.png new file mode 100644 index 00000000..eb690e3f Binary files /dev/null and b/_content/doc/gopher/run.png differ diff --git a/_content/doc/gopher/talks.png b/_content/doc/gopher/talks.png new file mode 100644 index 00000000..589db470 Binary files /dev/null and b/_content/doc/gopher/talks.png differ diff --git a/_content/doc/help.html b/_content/doc/help.html new file mode 100644 index 00000000..3d32ae5d --- /dev/null +++ b/_content/doc/help.html @@ -0,0 +1,96 @@ + + + + ++Get help from Go users, and share your work on the official mailing list. +
++Search the golang-nuts +archives and consult the FAQ and +wiki before posting. +
+ ++The Go Forum is a discussion +forum for Go programmers. +
+ ++Get live support and talk with other gophers on the Go Discord. +
+ +Get live support from other users in the Go Slack channel.
+ +Get live support at #go-nuts on irc.freenode.net, the official +Go IRC channel.
+{{end}} + +Answers to common questions about Go.
+ +{{if not $.GoogleCN}} ++Subscribe to +golang-announce +for important announcements, such as the availability of new Go releases. +
+ +The Go project's official blog.
+ +The Go project's official Twitter account.
+ ++The golang subreddit is a place +for Go news and discussion. +
+ ++The Go Time podcast is a panel of Go experts and special guests +discussing the Go programming language, the community, and everything in between. +
+{{end}} + ++Each month in places around the world, groups of Go programmers ("gophers") +meet to talk about Go. Find a chapter near you. +
+ +{{if not $.GoogleCN}} +A place to write, run, and share Go code.
+ +A wiki maintained by the Go community.
+{{end}} + ++Guidelines for participating in Go community spaces +and a reporting process for handling issues. +
+ diff --git a/_content/doc/ie.css b/_content/doc/ie.css new file mode 100644 index 00000000..bb89d54b --- /dev/null +++ b/_content/doc/ie.css @@ -0,0 +1 @@ +#nav-main li { display: inline; } diff --git a/_content/doc/play/fib.go b/_content/doc/play/fib.go new file mode 100644 index 00000000..19e47210 --- /dev/null +++ b/_content/doc/play/fib.go @@ -0,0 +1,19 @@ +package main + +import "fmt" + +// fib returns a function that returns +// successive Fibonacci numbers. +func fib() func() int { + a, b := 0, 1 + return func() int { + a, b = b, a+b + return a + } +} + +func main() { + f := fib() + // Function calls are evaluated left-to-right. + fmt.Println(f(), f(), f(), f(), f()) +} diff --git a/_content/doc/play/hello.go b/_content/doc/play/hello.go new file mode 100644 index 00000000..3badf125 --- /dev/null +++ b/_content/doc/play/hello.go @@ -0,0 +1,9 @@ +// You can edit this code! +// Click here and start typing. +package main + +import "fmt" + +func main() { + fmt.Println("Hello, 世界") +} diff --git a/_content/doc/play/life.go b/_content/doc/play/life.go new file mode 100644 index 00000000..51afb61f --- /dev/null +++ b/_content/doc/play/life.go @@ -0,0 +1,113 @@ +// An implementation of Conway's Game of Life. +package main + +import ( + "bytes" + "fmt" + "math/rand" + "time" +) + +// Field represents a two-dimensional field of cells. +type Field struct { + s [][]bool + w, h int +} + +// NewField returns an empty field of the specified width and height. +func NewField(w, h int) *Field { + s := make([][]bool, h) + for i := range s { + s[i] = make([]bool, w) + } + return &Field{s: s, w: w, h: h} +} + +// Set sets the state of the specified cell to the given value. +func (f *Field) Set(x, y int, b bool) { + f.s[y][x] = b +} + +// Alive reports whether the specified cell is alive. +// If the x or y coordinates are outside the field boundaries they are wrapped +// toroidally. For instance, an x value of -1 is treated as width-1. +func (f *Field) Alive(x, y int) bool { + x += f.w + x %= f.w + y += f.h + y %= f.h + return f.s[y][x] +} + +// Next returns the state of the specified cell at the next time step. +func (f *Field) Next(x, y int) bool { + // Count the adjacent cells that are alive. + alive := 0 + for i := -1; i <= 1; i++ { + for j := -1; j <= 1; j++ { + if (j != 0 || i != 0) && f.Alive(x+i, y+j) { + alive++ + } + } + } + // Return next state according to the game rules: + // exactly 3 neighbors: on, + // exactly 2 neighbors: maintain current state, + // otherwise: off. + return alive == 3 || alive == 2 && f.Alive(x, y) +} + +// Life stores the state of a round of Conway's Game of Life. +type Life struct { + a, b *Field + w, h int +} + +// NewLife returns a new Life game state with a random initial state. +func NewLife(w, h int) *Life { + a := NewField(w, h) + for i := 0; i < (w * h / 4); i++ { + a.Set(rand.Intn(w), rand.Intn(h), true) + } + return &Life{ + a: a, b: NewField(w, h), + w: w, h: h, + } +} + +// Step advances the game by one instant, recomputing and updating all cells. +func (l *Life) Step() { + // Update the state of the next field (b) from the current field (a). + for y := 0; y < l.h; y++ { + for x := 0; x < l.w; x++ { + l.b.Set(x, y, l.a.Next(x, y)) + } + } + // Swap fields a and b. + l.a, l.b = l.b, l.a +} + +// String returns the game board as a string. 
+func (l *Life) String() string { + var buf bytes.Buffer + for y := 0; y < l.h; y++ { + for x := 0; x < l.w; x++ { + b := byte(' ') + if l.a.Alive(x, y) { + b = '*' + } + buf.WriteByte(b) + } + buf.WriteByte('\n') + } + return buf.String() +} + +func main() { + l := NewLife(40, 15) + for i := 0; i < 300; i++ { + l.Step() + fmt.Print("\x0c", l) // Clear screen and print field. + time.Sleep(time.Second / 30) + } +} diff --git a/_content/doc/play/peano.go b/_content/doc/play/peano.go new file mode 100644 index 00000000..214fe1b6 --- /dev/null +++ b/_content/doc/play/peano.go @@ -0,0 +1,88 @@ +// Peano integers are represented by a linked +// list whose nodes contain no data +// (the nodes are the data). +// http://en.wikipedia.org/wiki/Peano_axioms + +// This program demonstrates that Go's automatic +// stack management can handle heavily recursive +// computations. + +package main + +import "fmt" + +// Number is a pointer to a Number +type Number *Number + +// The arithmetic value of a Number is the +// count of the nodes comprising the list. +// (See the count function below.) + +// ------------------------------------- +// Peano primitives + +func zero() *Number { + return nil +} + +func isZero(x *Number) bool { + return x == nil +} + +func add1(x *Number) *Number { + e := new(Number) + *e = x + return e +} + +func sub1(x *Number) *Number { + return *x +} + +func add(x, y *Number) *Number { + if isZero(y) { + return x + } + return add(add1(x), sub1(y)) +} + +func mul(x, y *Number) *Number { + if isZero(x) || isZero(y) { + return zero() + } + return add(mul(x, sub1(y)), x) +} + +func fact(n *Number) *Number { + if isZero(n) { + return add1(zero()) + } + return mul(fact(sub1(n)), n) +} + +// ------------------------------------- +// Helpers to generate/count Peano integers + +func gen(n int) *Number { + if n > 0 { + return add1(gen(n - 1)) + } + return zero() +} + +func count(x *Number) int { + if isZero(x) { + return 0 + } + return count(sub1(x)) + 1 +} + +// ------------------------------------- +// Print i! for i in [0,9] + +func main() { + for i := 0; i <= 9; i++ { + f := count(fact(gen(i))) + fmt.Println(i, "! =", f) + } +} diff --git a/_content/doc/play/pi.go b/_content/doc/play/pi.go new file mode 100644 index 00000000..f61884e8 --- /dev/null +++ b/_content/doc/play/pi.go @@ -0,0 +1,34 @@ +// Concurrent computation of pi. +// See https://goo.gl/la6Kli. +// +// This demonstrates Go's ability to handle +// large numbers of concurrent processes. +// It is an unreasonable way to calculate pi. +package main + +import ( + "fmt" + "math" +) + +func main() { + fmt.Println(pi(5000)) +} + +// pi launches n goroutines to compute an +// approximation of pi. +func pi(n int) float64 { + ch := make(chan float64) + for k := 0; k <= n; k++ { + go term(ch, float64(k)) + } + f := 0.0 + for k := 0; k <= n; k++ { + f += <-ch + } + return f +} + +func term(ch chan float64, k float64) { + ch <- 4 * math.Pow(-1, k) / (2*k + 1) +} diff --git a/_content/doc/play/sieve.go b/_content/doc/play/sieve.go new file mode 100644 index 00000000..51909345 --- /dev/null +++ b/_content/doc/play/sieve.go @@ -0,0 +1,36 @@ +// A concurrent prime sieve + +package main + +import "fmt" + +// Send the sequence 2, 3, 4, ... to channel 'ch'. +func Generate(ch chan<- int) { + for i := 2; ; i++ { + ch <- i // Send 'i' to channel 'ch'. + } +} + +// Copy the values from channel 'in' to channel 'out', +// removing those divisible by 'prime'. 
+func Filter(in <-chan int, out chan<- int, prime int) { + for { + i := <-in // Receive value from 'in'. + if i%prime != 0 { + out <- i // Send 'i' to 'out'. + } + } +} + +// The prime sieve: Daisy-chain Filter processes. +func main() { + ch := make(chan int) // Create a new channel. + go Generate(ch) // Launch Generate goroutine. + for i := 0; i < 10; i++ { + prime := <-ch + fmt.Println(prime) + ch1 := make(chan int) + go Filter(ch, ch1, prime) + ch = ch1 + } +} diff --git a/_content/doc/play/solitaire.go b/_content/doc/play/solitaire.go new file mode 100644 index 00000000..15022aa1 --- /dev/null +++ b/_content/doc/play/solitaire.go @@ -0,0 +1,117 @@ +// This program solves the (English) peg +// solitaire board game. +// http://en.wikipedia.org/wiki/Peg_solitaire + +package main + +import "fmt" + +const N = 11 + 1 // length of a row (+1 for \n) + +// The board must be surrounded by 2 illegal +// fields in each direction so that move() +// doesn't need to check the board boundaries. +// Periods represent illegal fields, +// ● are pegs, and ○ are holes. + +var board = []rune( + `........... +........... +....●●●.... +....●●●.... +..●●●●●●●.. +..●●●○●●●.. +..●●●●●●●.. +....●●●.... +....●●●.... +........... +........... +`) + +// center is the position of the center hole if +// there is a single one; otherwise it is -1. +var center int + +func init() { + n := 0 + for pos, field := range board { + if field == '○' { + center = pos + n++ + } + } + if n != 1 { + center = -1 // no single hole + } +} + +var moves int // number of times move is called + +// move tests if there is a peg at position pos that +// can jump over another peg in direction dir. If the +// move is valid, it is executed and move returns true. +// Otherwise, move returns false. +func move(pos, dir int) bool { + moves++ + if board[pos] == '●' && board[pos+dir] == '●' && board[pos+2*dir] == '○' { + board[pos] = '○' + board[pos+dir] = '○' + board[pos+2*dir] = '●' + return true + } + return false +} + +// unmove reverts a previously executed valid move. +func unmove(pos, dir int) { + board[pos] = '●' + board[pos+dir] = '●' + board[pos+2*dir] = '○' +} + +// solve tries to find a sequence of moves such that +// there is only one peg left at the end; if center is +// >= 0, that last peg must be in the center position. +// If a solution is found, solve prints the board after +// each move in a backward fashion (i.e., the last +// board position is printed first, all the way back to +// the starting board position). 
+func solve() bool { + var last, n int + for pos, field := range board { + // try each board position + if field == '●' { + // found a peg + for _, dir := range [...]int{-1, -N, +1, +N} { + // try each direction + if move(pos, dir) { + // a valid move was found and executed, + // see if this new board has a solution + if solve() { + unmove(pos, dir) + fmt.Println(string(board)) + return true + } + unmove(pos, dir) + } + } + last = pos + n++ + } + } + // tried each possible move + if n == 1 && (center < 0 || last == center) { + // there's only one peg left + fmt.Println(string(board)) + return true + } + // no solution found for this board + return false +} + +func main() { + if !solve() { + fmt.Println("no solution found") + } + fmt.Println(moves, "moves tried") +} diff --git a/_content/doc/play/tree.go b/_content/doc/play/tree.go new file mode 100644 index 00000000..3790e6cd --- /dev/null +++ b/_content/doc/play/tree.go @@ -0,0 +1,100 @@ +// Go's concurrency primitives make it easy to +// express concurrent concepts, such as +// this binary tree comparison. +// +// Trees may be of different shapes, +// but have the same contents. For example: +// +// 4 6 +// 2 6 4 7 +// 1 3 5 7 2 5 +// 1 3 +// +// This program compares a pair of trees by +// walking each in its own goroutine, +// sending their contents through a channel +// to a third goroutine that compares them. + +package main + +import ( + "fmt" + "math/rand" +) + +// A Tree is a binary tree with integer values. +type Tree struct { + Left *Tree + Value int + Right *Tree +} + +// Walk traverses a tree depth-first, +// sending each Value on a channel. +func Walk(t *Tree, ch chan int) { + if t == nil { + return + } + Walk(t.Left, ch) + ch <- t.Value + Walk(t.Right, ch) +} + +// Walker launches Walk in a new goroutine, +// and returns a read-only channel of values. +func Walker(t *Tree) <-chan int { + ch := make(chan int) + go func() { + Walk(t, ch) + close(ch) + }() + return ch +} + +// Compare reads values from two Walkers +// that run simultaneously, and returns true +// if t1 and t2 have the same contents. +func Compare(t1, t2 *Tree) bool { + c1, c2 := Walker(t1), Walker(t2) + for { + v1, ok1 := <-c1 + v2, ok2 := <-c2 + if !ok1 || !ok2 { + return ok1 == ok2 + } + if v1 != v2 { + break + } + } + return false +} + +// New returns a new, random binary tree +// holding the values 1k, 2k, ..., nk. +func New(n, k int) *Tree { + var t *Tree + for _, v := range rand.Perm(n) { + t = insert(t, (1+v)*k) + } + return t +} + +func insert(t *Tree, v int) *Tree { + if t == nil { + return &Tree{nil, v, nil} + } + if v < t.Value { + t.Left = insert(t.Left, v) + return t + } + t.Right = insert(t.Right, v) + return t +} + +func main() { + t1 := New(100, 1) + fmt.Println(Compare(t1, New(100, 1)), "Same Contents") + fmt.Println(Compare(t1, New(99, 1)), "Differing Sizes") + fmt.Println(Compare(t1, New(100, 2)), "Differing Values") + fmt.Println(Compare(t1, New(101, 2)), "Dissimilar") +} diff --git a/_content/doc/progs/cgo1.go b/_content/doc/progs/cgo1.go new file mode 100644 index 00000000..d559e139 --- /dev/null +++ b/_content/doc/progs/cgo1.go @@ -0,0 +1,22 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package rand + +/* +#include+The Go website (the "Website") is hosted by Google. +By using and/or visiting the Website, you consent to be bound by Google's general +Terms of Service +and Google's general +Privacy Policy. +