Go standards and style guidelines
This document describes various guidelines and best practices for GitLab projects using the Go language.
Overview
GitLab is built on top of Ruby on Rails, but we also use Go for projects where it makes sense. Go is a powerful language with many advantages, and is best suited for projects with a lot of I/O (disk/network access), HTTP requests, parallel processing, and so on. Since we have both Ruby on Rails and Go at GitLab, we should evaluate carefully which of the two is best for the job.
This page aims to define and organize our Go guidelines, based on our various
experiences. Several projects were started with different standards and they
may still have their own specific conventions. These are described in their respective
`README.md` or `PROCESS.md` files.
Dependency Management
Go uses a source-based strategy for dependency management. Dependencies are downloaded as source from their source repository. This differs from the more common artifact-based strategy where dependencies are downloaded as artifacts from a package repository that is separate from the dependency's source repository.
Go did not have first-class support for version management prior to 1.11. That version introduced Go modules and the use of semantic versioning. Go 1.12 introduced module proxies, which can serve as an intermediary between clients and source version control systems, and checksum databases, which can be used to verify the integrity of dependency downloads.
See Dependency Management in Go for more details.
Code Review
We follow the common principles of Go Code Review Comments.
Reviewers and maintainers should pay attention to:
- `defer` functions: ensure their presence when needed, and after the `err` check.
- Inject dependencies as parameters.
- Void structs when marshaling to JSON (generates `null` instead of `[]`; see the example after this list).
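For example, a nil slice marshals to `null` while an empty (non-nil) slice marshals to `[]`, and `defer` cleanup should only be registered after the error check. A minimal sketch (the file name is only illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	var nilSlice []string    // marshals to null
	emptySlice := []string{} // marshals to []

	out1, _ := json.Marshal(nilSlice)
	out2, _ := json.Marshal(emptySlice)
	fmt.Println(string(out1), string(out2)) // null []

	// Register defer only after the error check: if Open fails,
	// f is nil and f.Close() would panic.
	f, err := os.Open("config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
}
```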
Security
Security is our top priority at GitLab. During code reviews, we must take care of possible security breaches in our code:
- XSS when using `text/template` (see the sketch after this list)
- CSRF Protection using Gorilla
- Use a Go version without known vulnerabilities
- Don't leak secret tokens
- SQL injections
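For the first point, note that `text/template` does not escape output, so user-controlled data rendered into HTML should go through `html/template`, which escapes according to context. A minimal sketch:

```go
package main

import (
	"html/template" // unlike text/template, escapes output contextually
	"os"
)

func main() {
	// User-controlled input attempting script injection.
	userInput := `<script>alert("xss")</script>`

	tmpl := template.Must(template.New("page").Parse("<p>Hello, {{.}}</p>"))

	// Renders the payload as harmless escaped text:
	// <p>Hello, &lt;script&gt;alert(&#34;xss&#34;)&lt;/script&gt;</p>
	_ = tmpl.Execute(os.Stdout, userInput)
}
```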
Remember to run SAST and Dependency Scanning (ULTIMATE) on your project (or at least the `gosec` analyzer), and to follow our Security requirements.

Web servers can take advantage of middleware like Secure.
Finding a reviewer
Many of our projects are too small to have full-time maintainers. That's why we have a shared pool of Go reviewers at GitLab. To find a reviewer, use the "Go" section of the "GitLab" project on the Engineering Projects page in the handbook.
To add yourself to this list, add the following to your profile in the team.yml file and ask your manager to review and merge.
```yaml
projects:
  gitlab: reviewer go
```
Code style and format
- Avoid global variables, even in packages. By doing so you introduce side effects if the package is included multiple times.
- Use `goimports` before committing. `goimports` is a tool that automatically formats Go source code using `Gofmt`, in addition to formatting import lines, adding missing ones and removing unreferenced ones.
  Most editors/IDEs allow you to run commands before/after saving a file; you can set it up to run `goimports` so that it's applied to every file when saving.
- Place private methods below the first caller method in the source file.
Automatic linting
All Go projects should include these GitLab CI/CD jobs:
```yaml
lint:
  image: registry.gitlab.com/gitlab-org/gitlab-build-images:golangci-lint-alpine
  stage: test
  script:
    # Use default .golangci.yml file from the image if one is not present in the project root.
    - '[ -e .golangci.yml ] || cp /golangci/.golangci.yml .'
    # Write the code quality report to gl-code-quality-report.json
    # and print linting issues to stdout in the format: path/to/file:line description
    # remove `--issues-exit-code 0` or set to non-zero to fail the job if linting issues are detected
    - golangci-lint run --issues-exit-code 0 --out-format code-climate | tee gl-code-quality-report.json | jq -r '.[] | "\(.location.path):\(.location.lines.begin) \(.description)"'
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
    paths:
      - gl-code-quality-report.json
```
Including a `.golangci.yml` in the root directory of the project allows for configuration of `golangci-lint`. All options for `golangci-lint` are listed in this example.
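As an illustration, a minimal `.golangci.yml` that enables a couple of extra linters on top of the defaults might look like this (the linter selection is only an example; the default configuration shipped in the build image may already be sufficient):

```yaml
linters:
  enable:
    - goimports
    - gosec
```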
Once recursive includes become available, you can share job templates like this analyzer.
Go GitLab linter plugins are maintained in the `gitlab-org/language-tools/go/linters` namespace.
Dependencies
Dependencies should be kept to a minimum. The introduction of a new dependency should be argued in the merge request, as per our Approval Guidelines. Both License Scanning (ULTIMATE) and Dependency Scanning (ULTIMATE) should be activated on all projects to ensure new dependencies' security status and license compatibility.
Modules
In Go 1.11 and later, a standard dependency system is available behind the name Go Modules. It provides a way to define and lock dependencies for reproducible builds. It should be used whenever possible.
When Go Modules are in use, there should not be a `vendor/` directory. Instead,
Go automatically downloads dependencies when they are needed to build the
project. This is in line with how dependencies are handled with Bundler in Ruby
projects, and makes merge requests easier to review.
In some cases, such as building a Go project for it to act as a dependency of a
CI run for another project, removing the `vendor/` directory means the code must
be downloaded repeatedly, which can lead to intermittent problems due to rate
limiting or network failures. In these circumstances, you should cache the
downloaded code between runs.
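One way to do this in GitLab CI/CD (the variable name and paths below are illustrative) is to point the Go module cache inside the project directory and cache that path between jobs:

```yaml
variables:
  GOPATH: $CI_PROJECT_DIR/.go   # modules are downloaded to $GOPATH/pkg/mod

cache:
  paths:
    - .go/pkg/mod/
```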
There was a bug affecting module checksums in Go versions earlier than v1.11.4, so make sure to use at least this version to avoid checksum mismatch errors.
ORM
We don't use object-relational mapping libraries (ORMs) at GitLab (except ActiveRecord in Ruby on Rails). Projects can be structured with services to avoid them. `pgx` should be enough to interact with PostgreSQL databases.
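A minimal sketch, assuming `pgx` v4 (the connection string and query are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()

	// Placeholder connection string; in real code, read it from configuration.
	conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/mydb")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close(ctx)

	var name string
	if err := conn.QueryRow(ctx, "SELECT name FROM projects WHERE id = $1", 1).Scan(&name); err != nil {
		log.Fatalf("query: %v", err)
	}
	fmt.Println(name)
}
```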
Migrations
In the rare event of managing a hosted database, it's necessary to use a migration system like the one ActiveRecord provides. A simple library like Journey, designed to be used in `postgres` containers, can be deployed as long-running pods. New versions deploy a new pod, migrating the data automatically.
Testing
Testing frameworks
We should not use any specific library or framework for testing, as the standard library already provides everything needed to get started. If there is a need for more sophisticated testing tools, external dependencies such as Testify (used below for assertions) might be worth considering.
Subtests
Use subtests whenever possible to improve code readability and test output.
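For example, a minimal, self-contained sketch using `t.Run`:

```go
package main

import (
	"strings"
	"testing"
)

// TestSplitHostPort groups related cases as subtests, so failures are
// reported per case (e.g. "TestSplitHostPort/missing_port").
func TestSplitHostPort(t *testing.T) {
	t.Run("host and port", func(t *testing.T) {
		parts := strings.Split("localhost:8080", ":")
		if len(parts) != 2 {
			t.Fatalf("expected 2 parts, got %d", len(parts))
		}
	})

	t.Run("missing port", func(t *testing.T) {
		parts := strings.Split("localhost", ":")
		if len(parts) != 1 {
			t.Fatalf("expected 1 part, got %d", len(parts))
		}
	})
}
```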
Better output in tests
When comparing expected and actual values in tests, use `testify/require.Equal`, `testify/require.EqualError`, `testify/require.EqualValues`, and others to improve readability when comparing structs, errors, large portions of text, or JSON documents:
```go
type TestData struct {
	// ...
}

func FuncUnderTest() TestData {
	// ...
}

func Test(t *testing.T) {
	t.Run("FuncUnderTest", func(t *testing.T) {
		want := TestData{}
		got := FuncUnderTest()

		require.Equal(t, want, got) // note that expected value comes first, then comes the actual one ("diff" semantics)
	})
}
```
Table-Driven Tests
Using Table-Driven Tests is generally good practice when you have multiple entries of inputs/outputs for the same function. Below are some guidelines you can follow when writing table-driven tests, along with a short sketch after the lists that applies them. These guidelines are mostly extracted from the Go standard library source code. Keep in mind it's OK not to follow these guidelines when it makes sense.
Defining test cases
Each table entry is a complete test case with inputs and expected results, and sometimes with additional information such as a test name to make the test output easily readable.
- Define a slice of anonymous struct inside of the test.
- Define a slice of anonymous struct outside of the test.
- Named structs for code reuse.
- Using `map[string]struct{}`.
Contents of the test case
- Ideally, each test case should have a field with a unique identifier
  to use for naming subtests. In the Go standard library, this is commonly the
  `name string` field.
- Use `want`/`expect`/`actual` when you are specifying something in the test case that is used for assertion.
Variable names
- Each table-driven test map/slice of struct can be named `tests`.
- When looping through `tests` the anonymous struct can be referred to as `tt` or `tc`.
- The description of the test can be referred to as `name`/`testName`/`tn`.
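The sketch below applies these guidelines, using `strings.ToUpper` as a stand-in for the function under test:

```go
package main

import (
	"strings"
	"testing"
)

func TestToUpper(t *testing.T) {
	// Each entry is a complete test case; `name` is used for the subtest name.
	tests := []struct {
		name  string
		input string
		want  string
	}{
		{name: "lowercase", input: "gitlab", want: "GITLAB"},
		{name: "already uppercase", input: "GO", want: "GO"},
		{name: "empty string", input: "", want: ""},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := strings.ToUpper(tt.input)
			if got != tt.want {
				t.Errorf("ToUpper(%q) = %q, want %q", tt.input, got, tt.want)
			}
		})
	}
}
```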
Benchmarks
Programs handling a lot of IO or complex operations should always include benchmarks, to ensure performance consistency over time.
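A minimal benchmark sketch using the standard `testing` package (the operation being measured is only an example):

```go
package main

import (
	"bytes"
	"testing"
)

// BenchmarkBufferWrite measures the cost of writing a payload to a fresh buffer.
func BenchmarkBufferWrite(b *testing.B) {
	payload := []byte("some payload")

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var buf bytes.Buffer
		buf.Write(payload)
	}
}
```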
Error handling
Adding context
Adding context before you return the error can be helpful, instead of just returning the error. This allows developers to understand what the program was trying to do when it entered the error state, making it much easier to debug.
For example:
```go
// Wrap the error
return nil, fmt.Errorf("get cache %s: %w", f.Name, err)

// Just add context
return nil, fmt.Errorf("saving cache %s: %v", f.Name, err)
```
A few things to keep in mind when adding context:
- Decide if you want to expose the underlying error
  to the caller. If so, use `%w`; if not, you can use `%v`.
- Don't use words like `failed`, `error`, `didn't`. As it's an error, the user already knows that
  something failed and this might lead to having strings like `failed xx failed xx failed xx`.
  Explain what failed instead.
- Error strings should not be capitalized or end with punctuation or a
  newline. You can use `golint` to check for this.
Naming
- When using sentinel errors they should always be named like `ErrXxx`.
- When creating a new error type they should always be named like `XxxError`.
Checking Error types
- To check error equality don't use `==`. Use `errors.Is` instead (for Go versions >= 1.13).
- To check if the error is of a certain type don't use type assertion,
  use `errors.As` instead (for Go versions >= 1.13). See the sketch after this list.
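A short sketch combining the naming and checking conventions above (the error names are only illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel error, named ErrXxx.
var ErrNotFound = errors.New("record not found")

// Custom error type, named XxxError.
type ValidationError struct {
	Field string
}

func (e *ValidationError) Error() string {
	return fmt.Sprintf("validating field %s", e.Field)
}

func lookup(id int) error {
	if id == 0 {
		return &ValidationError{Field: "id"}
	}
	// Wrap the sentinel so callers can still match it with errors.Is.
	return fmt.Errorf("lookup %d: %w", id, ErrNotFound)
}

func main() {
	if err := lookup(42); errors.Is(err, ErrNotFound) { // not err == ErrNotFound
		fmt.Println("not found:", err)
	}

	var vErr *ValidationError
	if errors.As(lookup(0), &vErr) { // not a type assertion
		fmt.Println("invalid field:", vErr.Field)
	}
}
```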
References for working with errors
CLIs
Every Go program is launched from the command line. `cli` is a convenient package to create command line apps. It should be used whether the project is a daemon or a simple CLI tool. Flags can be mapped to environment variables directly, which documents and centralizes at the same time all the possible command line interactions with the program. Don't use `os.Getenv`; it hides variables deep in the code.
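A minimal sketch, assuming the `urfave/cli` v2 package (the flag and environment variable names are illustrative):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Name:  "greeter",
		Usage: "example daemon",
		Flags: []cli.Flag{
			&cli.StringFlag{
				Name:    "listen",
				Value:   ":8080",
				Usage:   "address to listen on",
				EnvVars: []string{"GREETER_LISTEN"}, // flag doubles as a documented env var
			},
		},
		Action: func(c *cli.Context) error {
			fmt.Println("listening on", c.String("listen"))
			return nil
		},
	}

	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```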
Daemons
Logging
The usage of a logging library is strongly recommended for daemons. Even though there is a `log` package in the standard library, we generally use Logrus. Its plugin ("hooks") system makes it a powerful logging library, with the ability to add notifiers and formatters at the logger level directly.
Structured (JSON) logging
Every binary should ideally have structured (JSON) logging in place, as it helps with searching and filtering the logs. At GitLab we use structured logging in JSON format, as all our infrastructure assumes that. When using Logrus you can turn on structured logging simply by using the built-in JSON formatter. This follows the same logging type we use in our Ruby applications.
How to use Logrus
There are a few guidelines one should follow when using the Logrus package:
- When printing an error use `WithError`. For example,
  `logrus.WithError(err).Error("Failed to do something")`.
- Since we use structured logging we can log fields in the context of that code path, such as the
  URI of the request, using `WithField` or `WithFields`. For example,
  `logrus.WithField("file", "/app/go").Info("Opening dir")`. If you have to log multiple keys,
  always use `WithFields` instead of calling `WithField` more than once, as in the sketch below.
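A minimal sketch putting these guidelines together with the JSON formatter (the field names are illustrative):

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

func main() {
	// Emit structured (JSON) logs, as our infrastructure expects.
	logrus.SetFormatter(&logrus.JSONFormatter{})

	logrus.WithFields(logrus.Fields{
		"uri":    "/api/v4/projects",
		"method": "GET",
	}).Info("Handling request")

	err := errors.New("connection refused")
	logrus.WithError(err).Error("Failed to reach upstream")
}
```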
Tracing and Correlation
LabKit is a place to keep common libraries for Go services. Currently it's vendored into two projects: Workhorse and Gitaly, and it exports two main (but related) pieces of functionality:
- `gitlab.com/gitlab-org/labkit/correlation`: for propagating and extracting correlation ids between services.
- `gitlab.com/gitlab-org/labkit/tracing`: for instrumenting Go libraries for distributed tracing.
This gives us a thin abstraction over underlying implementations that is consistent across Workhorse, Gitaly, and, in future, other Go servers. For example, in the case of `gitlab.com/gitlab-org/labkit/tracing` we can switch from using Opentracing directly to using Zipkin or Gokit's own tracing wrapper without changes to the application code, while still keeping the same consistent configuration mechanism (i.e. the `GITLAB_TRACING` environment variable).
Context
Since daemons are long-running applications, they should have mechanisms to manage cancellations and avoid unnecessary resource consumption (which could lead to DDoS vulnerabilities). Go Context should be used in functions that can block, and passed as the first parameter.
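A minimal sketch of a blocking function that honors cancellation (the timeout and workload are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// fetchData blocks until the work completes or the context is canceled.
// The context is always the first parameter.
func fetchData(ctx context.Context) (string, error) {
	select {
	case <-time.After(2 * time.Second): // simulated slow operation
		return "data", nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()

	if _, err := fetchData(ctx); err != nil {
		fmt.Println("aborted:", err) // context deadline exceeded
	}
}
```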
Dockerfiles
Every project should have a `Dockerfile` at the root of its repository, to build and run the project. Since Go programs are static binaries, they should not require any external dependency, and shells in the final image are useless. We encourage Multistage builds:
- They let the user build the project with the right Go version and dependencies.
- They generate a small, self-contained image, derived from `Scratch`.
Generated Docker images should have the program at their `Entrypoint` to create portable commands. That way, anyone can run the image, and without parameters it displays its help message (if `cli` has been used).
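A minimal multistage `Dockerfile` sketch (the Go image tag and build path are illustrative):

```dockerfile
# Build stage: pins the Go version and dependencies used to compile the project.
FROM golang:1.14 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: a small, self-contained image with no shell.
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```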
Distributing Go binaries
With the exception of GitLab Runner, which publishes its own binaries, our Go binaries are created by projects managed by the Distribution group.
The Omnibus GitLab project creates a single, monolithic operating system package containing all the binaries, while the Cloud-Native GitLab (CNG) project publishes a set of Docker images and Helm charts to glue them together.
Both approaches use the same version of Go for all projects, so it's important to ensure all our Go-using projects have at least one Go version in common in their test matrices. You can check the version of Go currently being used by Omnibus, and the version being used for CNG.
Updating Go version
We should always use a supported version of Go, i.e., one of the three most recent minor releases, and should always use the most recent patch-level for that version, as it may contain security fixes.
Changing the version affects every project being compiled, so it's important to ensure that all projects have been updated to test against the new Go version before changing the package builders to use it. Despite Go's compatibility promise, changes between minor versions can expose bugs or cause problems in our projects.
Once you've picked a new Go version to use, the steps to update Omnibus and CNG are:
- Create a merge request in the CNG project,
  update the `GO_VERSION` in `ci_files/variables.yml`.
- Create a merge request in the `gitlab-omnibus-builder` project,
  update the `GO_VERSION` in `docker/VERSIONS`.
- Tag a new release of `gitlab-omnibus-builder` containing the change.
- Create a merge request in the `omnibus-gitlab` project,
  update the `BUILDER_IMAGE_REVISION` to match the newly-created tag.
To reduce unnecessary differences between two distribution methods, Omnibus and CNG should always use the same Go version.
Supporting multiple Go versions
Individual Go projects need to support multiple Go versions for the following reasons:
- When a new Go release is out, we should start integrating it into the CI pipelines to verify compatibility with the new compiler.
- We must support the Omnibus official Go version, which may be behind the latest minor release.
- When Omnibus switches Go version, we still may need to support the old one for security backports.
These 3 requirements may easily be satisfied by keeping support for the 3 latest minor versions of Go.
It's OK to drop support for the oldest Go version and support only the 2 latest releases, if this is enough to support backports to the last 3 GitLab minor releases.
Example:
In case we want to drop support for Go `1.11` in GitLab `12.10`, we need to verify which Go versions we are using in `12.9`, `12.8`, and `12.7`.
We do not consider the active milestone, `12.10`, because a backport for `12.7` is required in case of a critical security release.
- If both Omnibus and CNG were using Go `1.12` in GitLab `12.7` and later, then we can safely drop support for `1.11`.
- If Omnibus or CNG were using `1.11` in GitLab `12.7`, then we still need to keep support for Go `1.11` for easier backporting of security fixes.
Secure Team standards and style guidelines
The following are some style guidelines that are specific to the Secure Team.
Code style and format
Use `goimports -local gitlab.com/gitlab-org` before committing.
`goimports` is a tool that automatically formats Go source code using `Gofmt`, in addition to formatting import lines, adding missing ones and removing unreferenced ones.

By using the `-local gitlab.com/gitlab-org` option, `goimports` groups locally referenced packages separately from external ones. See the imports section of the Code Review Comments page on the Go wiki for more details.

Most editors/IDEs allow you to run commands before/after saving a file; you can set it up to run `goimports -local gitlab.com/gitlab-org` so that it's applied to every file when saving.
Analyzer Tests
The conventional Secure analyzer has a `convert` function that converts SAST/DAST scanner reports into GitLab Security Reports. When writing tests for the `convert` function, we should make use of test fixtures using a `testdata` directory at the root of the analyzer's repository. The `testdata` directory should contain two subdirectories: `expect` and `reports`. The `reports` directory should contain sample SAST/DAST scanner reports which are passed into the `convert` function during the test setup. The `expect` directory should contain the expected GitLab Security Report that `convert` returns. See Secret Detection for an example.
If the scanner report is small, less than 35 lines, then feel free to inline the report rather than use a `testdata` directory.