Creating a database connection pool
// The sql.Open() function initializes a new sql.DB object, which is
// essentially a pool of database connections.
db, err := sql.Open("mysql", "web:pass@/snippetbox?parseTime=true")
if err != nil {
    ...
}
There are a few things about this code to explain and emphasize:
- The first parameter to sql.Open() is the driver name and the second parameter is the data source name (sometimes also called a connection string or DSN) which describes how to connect to your database.
- The format of the data source name will depend on which database and driver you’re using. Typically, you can find information and examples in the documentation for your specific driver. For the driver we’re using, you can find that documentation in the go-sql-driver/mysql README on GitHub.
- The parseTime=true part of the DSN above is a driver-specific parameter which instructs our driver to convert SQL TIME and DATE fields to Go time.Time objects.
- The sql.Open() function returns a sql.DB object. This isn’t a database connection — it’s a pool of many connections. This is an important difference to understand. Go manages the connections in this pool as needed, automatically opening and closing connections to the database via the driver.
- The connection pool is safe for concurrent access, so you can use it from web application handlers safely.
- The connection pool is intended to be long-lived. In a web application it’s normal to initialize the connection pool in your main() function and then pass the pool to your handlers. You shouldn’t call sql.Open() in a short-lived HTTP handler itself — it would be a waste of memory and network resources. A sketch of how you might tune the pool’s settings follows this list.
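The sql.DB pool also exposes a few settings that you can tune after calling sql.Open(). This is a minimal sketch using the SetMaxOpenConns(), SetMaxIdleConns() and SetConnMaxLifetime() methods from database/sql; the specific values here are illustrative, not recommendations:

db, err := sql.Open("mysql", "web:pass@/snippetbox?parseTime=true")
if err != nil {
    ...
}

// Cap the total number of open (in-use plus idle) connections in the pool.
db.SetMaxOpenConns(25)

// Cap the number of idle connections retained in the pool.
db.SetMaxIdleConns(25)

// Close and replace connections once they've been open for this long, which
// helps avoid hitting any server-side connection timeouts.
db.SetConnMaxLifetime(5 * time.Minute)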
Designing a database model
File: internal/models/snippet.go
package models

import (
    "database/sql"
    "time"
)

// Snippet holds the data for an individual snippet. Notice how
// the fields of the struct correspond to the fields in our MySQL
// snippet table.
type Snippet struct {
    ID      int
    Title   string
    Content string
    Created time.Time
    Expires time.Time
}

// SnippetModel wraps a sql.DB connection pool.
type SnippetModel struct {
    DB *sql.DB
}

// Insert inserts a new snippet into the database.
func (m *SnippetModel) Insert(title string, content string, expires int) (int, error) {
    return 0, nil
}

// Get returns a specific snippet based on its id.
func (m *SnippetModel) Get(id int) (Snippet, error) {
    return Snippet{}, nil
}

// Latest returns the 10 most recently created snippets.
func (m *SnippetModel) Latest() ([]Snippet, error) {
    return nil, nil
}
Using the SnippetModel
package main

import (
    "database/sql"
    "flag"
    "log/slog"
    "net/http"
    "os"

    "snippetbox/internal/models"

    _ "github.com/go-sql-driver/mysql"
)

// application struct holds the application-wide dependencies for the
// web application.
type application struct {
    logger  *slog.Logger
    snippet *models.SnippetModel
}

func main() {
    addr := flag.String("addr", ":4000", "HTTP network address")
    dbDriver := flag.String("dbdriver", "mysql", "Database driver name")
    dsn := flag.String("dsn", "zeb:zebpwd@tcp(localhost:3306)/snippetbox?parseTime=true", "MySQL data source name")
    flag.Parse()

    logger := slog.New(slog.NewTextHandler(os.Stdout, nil))

    db, err := openDB(*dbDriver, *dsn)
    if err != nil {
        logger.Error(err.Error())
        os.Exit(1)
    }

    // We defer a call to db.Close(), so that the connection pool is
    // closed before the main() function exits.
    defer db.Close()

    app := &application{
        logger:  logger,
        snippet: &models.SnippetModel{DB: db},
    }

    logger.Info("starting server", "addr", *addr)

    err = http.ListenAndServe(*addr, app.routes())
    logger.Error(err.Error())
    os.Exit(1)
}

func openDB(driverName string, dsn string) (*sql.DB, error) {
    db, err := sql.Open(driverName, dsn)
    if err != nil {
        return nil, err
    }

    err = db.Ping()
    if err != nil {
        db.Close()
        return nil, err
    }

    return db, nil
}
Benefits of this structure
If you take a step back, you might be able to see a few benefits of setting up our project in this way:
- There’s a clean separation of concerns. Our database logic won’t be tied to our handlers, which means that handler responsibilities are limited to HTTP stuff (i.e. validating requests and writing responses). This will make it easier to write tight, focused, unit tests in the future.
- By creating a custom SnippetModel type and implementing methods on it we’ve been able to make our model a single, neatly encapsulated object, which we can easily initialize and then pass to our handlers as a dependency. Again, this makes for easier to maintain, testable code.
- Because the model actions are defined as methods on an object — in our case SnippetModel — there’s the opportunity to create an interface and mock it for unit testing purposes.
- And finally, we have total control over which database is used at runtime, just by using the -dsn (and -dbdriver) command-line flags.
Executing the query
Go provides three different methods for executing database queries:
- DB.Query() is used for SELECT queries which return multiple rows.
- DB.QueryRow() is used for SELECT queries which return a single row.
- DB.Exec() is used for statements which don’t return rows (like INSERT and DELETE).
package models

import (
    "database/sql"
    "time"
)

// Snippet holds the data for an individual snippet. Notice how
// the fields of the struct correspond to the fields in our MySQL
// snippet table.
type Snippet struct {
    ID      int
    Title   string
    Content string
    Created time.Time
    Expires time.Time
}

// SnippetModel wraps a sql.DB connection pool.
type SnippetModel struct {
    DB *sql.DB
}

// Insert inserts a new snippet into the database.
func (m *SnippetModel) Insert(title string, content string, expires int) (int, error) {
    // Write the SQL statement we want to execute. I've split it over two lines
    // for readability (which is why it's surrounded with backquotes instead
    // of normal double quotes).
    stmt := `INSERT INTO snippet (title, content, created, expires)
    VALUES(?, ?, UTC_TIMESTAMP(), DATE_ADD(UTC_TIMESTAMP(), INTERVAL ? DAY))`

    result, err := m.DB.Exec(stmt, title, content, expires)
    if err != nil {
        return 0, err
    }

    // Use the LastInsertId() method on the result to get the ID of our
    // newly inserted record in the snippet table.
    id, err := result.LastInsertId()
    if err != nil {
        return 0, err
    }

    // The ID returned has the type int64, so we convert it to an int type
    // before returning.
    return int(id), nil
}

// Get returns a specific snippet based on its id.
func (m *SnippetModel) Get(id int) (Snippet, error) {
    return Snippet{}, nil
}

// Latest returns the 10 most recently created snippets.
func (m *SnippetModel) Latest() ([]Snippet, error) {
    return nil, nil
}
Let’s quickly discuss the sql.Result type returned by DB.Exec(). This provides two methods:
- LastInsertId() — which returns the integer (an int64) generated by the database in response to a command. Typically this will be from an “auto increment” column when inserting a new row, which is exactly what’s happening in our case.
- RowsAffected() — which returns the number of rows (as an int64) affected by the statement.
Important: Not all drivers and databases support the LastInsertId() and RowsAffected() methods. For example, LastInsertId() is not supported by PostgreSQL. So if you’re planning on using these methods it’s important to check the documentation for your particular driver first.
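For example, with PostgreSQL the usual workaround for the missing LastInsertId() is to add a RETURNING clause to the INSERT statement and read the generated ID back with DB.QueryRow(). A rough sketch of how our Insert() method might adapt (using the PostgreSQL placeholder syntax discussed below):

// PostgreSQL doesn't support LastInsertId(), so include a RETURNING clause
// in the statement and scan the generated ID back instead.
stmt := `INSERT INTO snippet (title, content, created, expires)
VALUES($1, $2, NOW(), NOW() + $3 * INTERVAL '1 day') RETURNING id`

var id int
err := m.DB.QueryRow(stmt, title, content, expires).Scan(&id)
if err != nil {
    return 0, err
}

return id, nil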
Placeholder parameters
In the code above we constructed our SQL statement using placeholder parameters, where ? acted as a placeholder for the data we want to insert.
The reason for using placeholder parameters to construct our query (rather than string interpolation) is to help avoid SQL injection attacks from any untrusted user-provided input.
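To make the contrast concrete, here is an illustrative (and deliberately bad) sketch of string interpolation next to a placeholder parameter, where userTitle is a hypothetical piece of untrusted input:

// UNSAFE: interpolating untrusted input into the statement means a value like
// `'); DROP TABLE snippet; --` can change the meaning of the SQL itself.
stmt := fmt.Sprintf("INSERT INTO snippet (title) VALUES ('%s')", userTitle)
_, err := m.DB.Exec(stmt)

// SAFE: with a placeholder parameter the value is transmitted separately from
// the statement, so the database always treats it as pure data.
_, err = m.DB.Exec("INSERT INTO snippet (title) VALUES (?)", userTitle)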
Behind the scenes, the DB.Exec() method works in three steps (a sketch of the equivalent explicit calls follows this list):
1. It creates a new prepared statement on the database using the provided SQL statement. The database parses and compiles the statement, then stores it ready for execution.
2. In a second separate step, DB.Exec() passes the parameter values to the database. The database then executes the prepared statement using these parameters. Because the parameters are transmitted later, after the statement has been compiled, the database treats them as pure data. They can’t change the intent of the statement. So long as the original statement is not derived from untrusted data, injection cannot occur.
3. It then closes (or deallocates) the prepared statement on the database.
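Here is a rough sketch of those three steps made explicit, using the DB.Prepare() method (covered properly later in this chapter) in place of the Exec() call from our Insert() method:

// Step 1: create a new prepared statement on the database, ready for
// execution.
stmt, err := m.DB.Prepare(`INSERT INTO snippet (title, content, created, expires)
VALUES(?, ?, UTC_TIMESTAMP(), DATE_ADD(UTC_TIMESTAMP(), INTERVAL ? DAY))`)
if err != nil {
    return 0, err
}

// Step 3 (deferred): close (deallocate) the prepared statement on the
// database.
defer stmt.Close()

// Step 2: pass the parameter values to the database, which executes the
// prepared statement using them as pure data.
result, err := stmt.Exec(title, content, expires)
if err != nil {
    return 0, err
}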
The placeholder parameter syntax differs depending on your database. MySQL, SQL Server and SQLite use the ? notation, but PostgreSQL uses the $N notation. For example, if you were using PostgreSQL instead you would write:
_, err := m.DB.Exec("INSERT INTO ... VALUES ($1, $2, $3)", ...)
Using the model in our handlers
func (app *application) snippetCreatePost(w http.ResponseWriter, r *http.Request) {
    title := "O snail"
    content := "O snail\nClimb Mount Fuji,\nBut slowly, slowly!\n\n- Kobayashi Issa"
    expires := 7

    id, err := app.snippet.Insert(title, content, expires)
    if err != nil {
        app.serverError(w, r, err)
        return
    }

    // Redirect the user to the relevant page for the snippet.
    http.Redirect(w, r, fmt.Sprintf("/snippet/view/%d", id), http.StatusSeeOther)
}
Start up the application, then open a second terminal window and use curl to make a POST /snippet/create request, like so (note that the -L flag instructs curl to automatically follow redirects):
zzh@ZZHPC:~$ curl -iL -d "" http://localhost:4000/snippet/create
HTTP/1.1 303 See Other
Location: /snippet/view/4
Date: Tue, 03 Sep 2024 08:29:57 GMT
Content-Length: 0

HTTP/1.1 200 OK
Date: Tue, 03 Sep 2024 08:29:57 GMT
Content-Length: 39
Content-Type: text/plain; charset=utf-8

Display a specific snippet with ID 4...
Single-record SQL queries
The pattern for executing a SELECT statement to retrieve a single record from the database is a little more complicated.
// Get returns a specific snippet based on its id.
func (m *SnippetModel) Get(id int) (Snippet, error) {
    stmt := `SELECT id, title, content, created, expires FROM snippet
    WHERE expires > UTC_TIMESTAMP() AND id = ?`

    row := m.DB.QueryRow(stmt, id)

    // Initialize a new zeroed Snippet struct.
    var s Snippet

    // Use row.Scan() to copy the values from each field in sql.Row to the
    // corresponding field in the Snippet struct. Notice that the arguments
    // to row.Scan() are *pointers* to the place you want to copy the data
    // into, and the number of arguments must be exactly the same as the
    // number of columns returned by your statement.
    err := row.Scan(&s.ID, &s.Title, &s.Content, &s.Created, &s.Expires)
    if err != nil {
        // If the query returns no rows, then row.Scan() will return a
        // sql.ErrNoRows error. We use the errors.Is() function to check for
        // that error specifically, and return our own ErrNoRecord error
        // instead.
        if errors.Is(err, sql.ErrNoRows) {
            return Snippet{}, ErrNoRecord
        } else {
            return Snippet{}, err
        }
    }

    return s, nil
}
File: internal/models/errors.go
package models

import "errors"

var ErrNoRecord = errors.New("models: no matching record found")
As an aside, you might be wondering why we’re returning the ErrNoRecord error from our SnippetModel.Get() method, instead of sql.ErrNoRows directly. The reason is to help encapsulate the model completely, so that our handlers aren’t concerned with the underlying datastore or reliant on datastore-specific errors (like sql.ErrNoRows) for their behavior.
Behind the scenes of rows.Scan() your driver will automatically convert the raw output from the SQL database to the required native Go types. So long as you’re sensible with the types that you’re mapping between SQL and Go, these conversions should generally Just Work. Usually:
- CHAR, VARCHAR and TEXT map to string.
- BOOLEAN maps to bool.
- INT maps to int; BIGINT maps to int64.
- DECIMAL and NUMERIC map to float64.
- TIME, DATE and TIMESTAMP map to time.Time.
Note: A quirk of our MySQL driver is that we need to use the parseTime=true parameter in our DSN to force it to convert TIME and DATE fields to time.Time . Otherwise it returns these as []byte objects. This is one of the many driver-specific parameters that it offers.
Using the model in our handlers
func (app *application) snippetView(w http.ResponseWriter, r *http.Request) {
    id, err := strconv.Atoi(r.PathValue("id"))
    if err != nil || id < 1 {
        http.NotFound(w, r)
        return
    }

    // Use the SnippetModel's Get() method to retrieve the data for a
    // specific record based on its ID. If no matching record is found,
    // return a 404 Not Found response.
    snippet, err := app.snippet.Get(id)
    if err != nil {
        if errors.Is(err, models.ErrNoRecord) {
            http.NotFound(w, r)
        } else {
            app.serverError(w, r, err)
        }
        return
    }

    // Write the snippet data as a plain-text HTTP response body.
    fmt.Fprintf(w, "%+v", snippet)
}
Multiple-record SQL queries
// Latest returns the 10 most recently created snippets.
func (m *SnippetModel) Latest() ([]Snippet, error) {
    stmt := `SELECT id, title, content, created, expires FROM snippet
    WHERE expires > UTC_TIMESTAMP() ORDER BY id DESC LIMIT 10`

    rows, err := m.DB.Query(stmt)
    if err != nil {
        return nil, err
    }

    // We defer rows.Close() to ensure the sql.Rows resultset is always
    // properly closed before the Latest() method returns. This defer
    // statement should come *after* you check for an error from the
    // Query() method. Otherwise, if Query() returns an error, you'll
    // get a panic trying to close a nil resultset.
    defer rows.Close()

    var snippets []Snippet

    // Use rows.Next() to iterate through the rows in the resultset. This
    // prepares the first (and then each subsequent) row to be acted on
    // by the rows.Scan() method. If iteration over all the rows completes
    // then the resultset automatically closes itself and frees up the
    // underlying database connection.
    for rows.Next() {
        var s Snippet

        err = rows.Scan(&s.ID, &s.Title, &s.Content, &s.Created, &s.Expires)
        if err != nil {
            return nil, err
        }

        snippets = append(snippets, s)
    }

    // When the rows.Next() loop has finished we call rows.Err() to retrieve any
    // error that was encountered during the iteration. It's important to call
    // this - don't assume that a successful iteration was completed over the
    // whole resultset.
    if err = rows.Err(); err != nil {
        return nil, err
    }

    return snippets, nil
}
Important: Closing a resultset with defer rows.Close() is critical in the code above. As long as a resultset is open it will keep the underlying database connection open… so if something goes wrong in this method and the resultset isn’t closed, it can rapidly lead to all the connections in your pool being used up.
The database/sql package
As you’re probably starting to realize, the database/sql package essentially provides a standard interface between your Go application and the world of SQL databases.
So long as you use the database/sql package, the Go code you write will generally be portable and will work with any kind of SQL database — whether it’s MySQL, PostgreSQL, SQLite or something else. This means that your application isn’t so tightly coupled to the database that you’re currently using, and the theory is that you can swap databases in the future without re-writing all of your code (driver-specific quirks and SQL implementations aside).
It’s important to note that while database/sql generally does a good job of providing a standard interface for working with SQL databases, there are some idiosyncrasies in the way that different drivers and databases operate. It’s always a good idea to read over the documentation for a new driver to understand any quirks and edge cases before you begin using it.
Managing null values
One thing that Go doesn’t do very well is managing NULL values in database records.
Let’s pretend that the title column in our snippet table contains a NULL value in a particular row. If we queried that row, then rows.Scan() would return the following error because it can’t convert NULL into a string:
sql: Scan error on column index 1: unsupported Scan, storing driver.Value type <nil> into type *string
Very roughly, the fix for this is to change the field that you’re scanning into from a string to a sql.NullString type.
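A minimal sketch, assuming row is an existing *sql.Row from a query that selects the id and title columns:

// sql.NullString records whether the scanned column was NULL (in its Valid
// field) and, if it wasn't, holds the value (in its String field).
var id int
var title sql.NullString

err := row.Scan(&id, &title)
if err != nil {
    return err
}

if title.Valid {
    fmt.Println(title.String)
} else {
    fmt.Println("[title is NULL]")
}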
But, as a rule, the easiest thing to do is simply avoid NULL values altogether. Set NOT NULL constraints on all your database columns, like we have done in this book, along with sensible DEFAULT values as necessary.
Working with transactions
It’s important to realize that calls to Exec(), Query() and QueryRow() can use any connection from the sql.DB pool. Even if you have two calls to Exec() immediately next to each other in your code, there is no guarantee that they will use the same database connection.
Sometimes this isn’t acceptable. For instance, if you lock a table with MySQL’s LOCK TABLES command you must call UNLOCK TABLES on exactly the same connection to avoid a deadlock.
To guarantee that the same connection is used you can wrap multiple statements in a transaction. Here’s the basic pattern:
type ExampleModel struct {
    DB *sql.DB
}

func (m *ExampleModel) ExampleTransaction() error {
    // Calling the Begin() method on the connection pool creates a new sql.Tx
    // object, which represents the in-progress database transaction.
    tx, err := m.DB.Begin()
    if err != nil {
        return err
    }

    // Defer a call to tx.Rollback() to ensure it is always called before the
    // function returns. If the transaction succeeds it will already have been
    // committed by the time tx.Rollback() is called, making tx.Rollback() a
    // no-op. Otherwise, in the event of an error, tx.Rollback() will roll back
    // the changes before the function returns.
    defer tx.Rollback()

    // Call Exec() on the transaction, passing in your statement and any
    // parameters. It's important to notice that tx.Exec() is called on the
    // transaction object just created, NOT the connection pool. Although we're
    // using tx.Exec() here you can also use tx.Query() and tx.QueryRow() in
    // exactly the same way.
    _, err = tx.Exec("INSERT INTO ...")
    if err != nil {
        return err
    }

    // Carry out another statement in exactly the same way.
    _, err = tx.Exec("UPDATE ...")
    if err != nil {
        return err
    }

    // If there are no errors, the statements in the transaction can be
    // committed to the database with the tx.Commit() method.
    err = tx.Commit()
    return err
}
Important: You must always call either Rollback() or Commit() before your function returns. If you don’t, the connection will stay open and not be returned to the connection pool. This can lead to hitting your maximum connection limit or running out of resources. The simplest way to avoid this is to use defer tx.Rollback() like we are in the example above.
Transactions are also super-useful if you want to execute multiple SQL statements as a single atomic action. So long as you use the tx.Rollback() method in the event of any errors, the transaction ensures that either:
- All statements are executed successfully; or
- No statements are executed and the database remains unchanged.
Prepared statements
As I mentioned earlier, the Exec(), Query() and QueryRow() methods all use prepared statements behind the scenes to help prevent SQL injection attacks. They set up a prepared statement on the database connection, run it with the parameters provided, and then close the prepared statement.
This might feel rather inefficient because we are creating and recreating the same prepared statements every single time.
In theory, a better approach could be to make use of the DB.Prepare() method to create our own prepared statement once, and reuse that instead. This is particularly true for complex SQL statements (e.g. those with multiple JOINs) that are repeated very often (e.g. a bulk insert of tens of thousands of records). In these instances, the cost of re-preparing statements may have a noticeable effect on run time.
Here’s the basic pattern for using your own prepared statement in a web application:
// We need somewhere to store the prepared statement for the lifetime of our
// web application. A neat way is to embed it in the model alongside the
// connection pool.
type ExampleModel struct {
    DB         *sql.DB
    InsertStmt *sql.Stmt
}

// Create a constructor for the model, in which we set up the prepared
// statement.
func NewExampleModel(db *sql.DB) (*ExampleModel, error) {
    // Use the Prepare method to create a new prepared statement for the
    // current connection pool. This returns a sql.Stmt object which represents
    // the prepared statement.
    insertStmt, err := db.Prepare("INSERT INTO ...")
    if err != nil {
        return nil, err
    }

    // Store it in our ExampleModel struct, alongside the connection pool.
    return &ExampleModel{DB: db, InsertStmt: insertStmt}, nil
}

// Any methods implemented against the ExampleModel struct will have access to
// the prepared statement.
func (m *ExampleModel) Insert(args ...) error {
    // We then need to call Exec directly against the prepared statement, rather
    // than against the connection pool. Prepared statements also support the
    // Query and QueryRow methods.
    _, err := m.InsertStmt.Exec(args...)
    return err
}

// In the web application's main function we will need to initialize a new
// ExampleModel struct using the constructor function.
func main() {
    db, err := sql.Open( ... )
    if err != nil {
        logger.Error(err.Error())
        os.Exit(1)
    }
    defer db.Close()

    // Use the constructor function to create a new ExampleModel struct.
    exampleModel, err := NewExampleModel(db)
    if err != nil {
        logger.Error(err.Error())
        os.Exit(1)
    }

    // Defer a call to Close() on the prepared statement to ensure that it is
    // properly closed before our main function terminates.
    defer exampleModel.InsertStmt.Close()
}
There are a few things to be wary of though.
Prepared statements exist on database connections. So, because Go uses a pool of many database connections, what actually happens is that the first time a prepared statement (i.e. the sql.Stmt object) is used it gets created on a particular database connection. The sql.Stmt object then remembers which connection in the pool was used. The next time, the sql.Stmt object will attempt to use the same database connection again. If that connection is closed or in use (i.e. not idle) the statement will be re-prepared on another connection.
Under heavy load, it’s possible that a large number of prepared statements will be created on multiple connections. This can lead to statements being prepared and re-prepared more often than you would expect — or even running into server-side limits on the number of statements (in MySQL the default maximum is 16,382 prepared statements).
The code is also more complicated than not using prepared statements.
So, there is a trade-off to be made between performance and complexity. As with anything, you should measure the actual performance benefit of implementing your own prepared statements to determine if it’s worth doing. For most cases, I would suggest that using the regular Query(), QueryRow() and Exec() methods — without preparing statements yourself — is a reasonable starting point.
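If you do decide to measure it, a standard Go benchmark is a natural fit. A minimal sketch, assuming a hypothetical newTestDB() helper that returns a *sql.DB pool for a test database, and a hypothetical example table:

func BenchmarkInsertUnprepared(b *testing.B) {
    db := newTestDB(b) // hypothetical test helper

    for i := 0; i < b.N; i++ {
        // Each iteration prepares, executes and closes the statement behind
        // the scenes.
        _, err := db.Exec("INSERT INTO example (n) VALUES (?)", i)
        if err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkInsertPrepared(b *testing.B) {
    db := newTestDB(b) // hypothetical test helper

    // Prepare the statement once, outside the benchmark loop.
    stmt, err := db.Prepare("INSERT INTO example (n) VALUES (?)")
    if err != nil {
        b.Fatal(err)
    }
    defer stmt.Close()

    for i := 0; i < b.N; i++ {
        _, err := stmt.Exec(i)
        if err != nil {
            b.Fatal(err)
        }
    }
}

Running go test -bench=. will then show whether re-preparing the statement actually matters for your particular queries and database.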