
Honeybadger Error Notifier

NOTES

Stack Traces

The "Go Way" of error reporting, as best as I can tell, involves building a semi-structured log line from all of the errors in a chain. I often find that hard to work with and would much rather get a stack trace that I can inspect. One goal of my goals was to make it easier to generate and send a stack trace to Honeybadger which is done by the Wrap function. Generally, Wrap should be called as low in the stack as possible to get a meaningful stack trace. It's safe to call Wrap on an already wrapped error.

Context

I wanted a more structured and type-safe way of creating the context objects that get sent to Honeybadger. I ended up with a zerolog-inspired Context().Type("key", value) style of setting values in the context, which builds up from the lowest wrapped error until the error is finally sent to Honeybadger. In my opinion, using a map like in the Ruby client felt cumbersome because of all the typing of map[string]interface{}, and it left the values loosely typed.
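For example (a sketch; the user and attempts variables are hypothetical placeholders), values are added through typed methods on the builder rather than through a map[string]interface{}:

hbErr := honeybadger.Wrap(err)
hbErr.Context().
  Str("user_id", user.ID).
  Int("attempts", attempts)

return hbErr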

Another goal for the context was being able to set all of the fields that are sent with the error. For example, I don't think I could have sent the equivalent of the Rails controller from a handler, even though it can be useful.

Notify takes an error

I've changed honeybadger.Notify to take an error instead of an interface{} because I don't know what the use case for interface{} would be. recover() does return an interface{}, which we can convert to an error in that one case.
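For the recover() case, a minimal sketch of that conversion looks like this:

defer func() {
  if v := recover(); v != nil {
    err, ok := v.(error)
    if !ok {
      // recover() returns an interface{}; convert non-error values.
      err = fmt.Errorf("panic: %v", v)
    }
    honeybadger.Notify(err)
  }
}()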

Known Errors

See known.go

Sentinel errors were being reported as errors.errorString, which is not something you can even do type checking against. Instead, I've added some basic mappings for common sentinel errors. I don't think it's a really scalable approach, but it sure makes io.EOF a lot easier to spot in the error log.
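The actual mappings live in known.go; purely as an illustration of the idea (not the real implementation), a lookup from sentinel errors to readable class names could look like this:

// Illustrative only; see known.go for the real mappings.
var knownErrors = map[error]string{
  io.EOF:              "io.EOF",
  io.ErrUnexpectedEOF: "io.ErrUnexpectedEOF",
  sql.ErrNoRows:       "sql.ErrNoRows",
}

func errorClass(err error) string {
  for sentinel, name := range knownErrors {
    if errors.Is(err, sentinel) {
      return name
    }
  }
  // Fall back to the dynamic type, e.g. *errors.errorString.
  return fmt.Sprintf("%T", err)
}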

TODO

Middleware

I can think of two kinds of middleware: filtering and logging. Filtering would avoid sending thousands of the same error in quick succession. Logging would replace the NullClient, since errors that are caught in development but not sent anywhere make debugging more difficult.
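Neither exists yet; one possible shape (purely illustrative, not an API this package defines) is a function that wraps whatever sends the notification:

// Illustrative only; this package does not define a middleware API yet.
type Notifier interface {
  Notify(err error) error
}

// Middleware wraps a Notifier with extra behavior, e.g. rate limiting
// duplicate errors (filtering) or logging instead of the NullClient.
type Middleware func(next Notifier) Notifier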

Building Repeated Context

Context is added before returning the error and is built up as the call stack unwinds. Setting the context (e.g. err.Context()) can be very repetitive, especially in functions that check err two or more times. I think the right approach is to add context only when there actually is an error. It would be good to come up with a pattern that reduces the repetition.

For example:

// errCtx wraps err and attaches the shared context in one place.
errCtx := func(err error) error {
  hb := honeybadger.Wrap(err)
  hb.Context().Str("user_id", user_id)

  return hb
}

err := networkCall(ctx)
if err != nil {
  return errCtx(err)
}

err = databaseCall(ctx, query)
if err != nil {
  return errCtx(err)
}

Setup

func main() {
	// NewConfig will read from the HONEYBADGER_API_KEY and HONEYBADGER_ENV
	// environment variables.
	cfg := honeybadger.NewConfig()
	cfg.APIKey = "123456789"
	cfg.Env = "production"

	// Set the global configuration.
	honeybadger.Configure(cfg)
}

Examples

To send an error to Honeybadger:

honeybadger.Notify(err)

To make it easier to diagnose errors and dig into the root cause, it's possible to attach a stack trace and other context to the error. To do this, use honeybadger.Wrap.

err := someNetworkCall(ctx)

// Wrap generates the stack trace as of the first call to Wrap and makes it
// possible to add new values to the context.
if hbErr := honeybadger.Wrap(err); hbErr != nil {
  hbErr.Context().
    Str("user_id", user_id).
    Int("len", len(queries))

  return hbErr
}
• Known errors
  • Sometimes they're aliased, e.g. io.EOF vs. os.EOF(?)
