Random thoughts

Go as a language interests me a lot, and interfaces are a big part of why. When I say interfaces, I am talking about the design: satisfied implicitly rather than explicitly, and tightly tied to behaviour. If you have ever spent some time writing Go code, one thing you will have noticed is the abundance of well defined, small (often one method) interfaces. The universal io.Reader and io.Writer, or the lesser known http.RoundTripper (I did a blog post on http.RoundTripper), …

This is nothing new. It is essentially the I in Uncle Bob's well known SOLID principles: clients should not be forced to depend on methods they do not use. And to some degree I feel it also adheres to the L in SOLID: since the interface is extremely small, it is easy to write another implementation that can be substituted without client code noticing the difference. The more "go to definition" takes me into the standard library, the more I have come to appreciate this composition of small interfaces.

I like testing, and while writing tests for my Go code, there is a certain thing that freaks the living hell out of me: mocks. Having to run an external tool such as mockery in order to generate mocks weirds me out. Not to mention the fact that the generated mocks are usually verbose and drag down code coverage[0]. That said, sometimes I write mocks by hand, and this is especially common when I am trying to simulate the failure of io.ReadAll() or a direct Read call: just a mock data structure whose Read method returns an error.

But in recent times, I have been evaluating code I wrote in the past, and looking at the absurd amount of cruft that goes into these mocks (autogenerated and handwritten alike), with a bunch of methods that never get used in the test but have to be present to fulfill a contract, I just shake my head. After watching spf13's 7 common mistakes in Go talk, I found a pattern that could be applied to the situation described above. In his talk, he specifically mentions that whenever a function takes an interface as a parameter, it should be the minimum or smallest contract the function needs to perform its operation. In his slides, he had something like:

You can think of this as the Interface segregation principle but for functions. Whatever that means.

So what does this look like in real life? Luckily, he gave an example:

// Where File is an interface composed of the following interfaces:
// io.Reader, io.ReaderAt, io.Seeker,
// io.Writer, io.WriterAt, io.Closer
func ReadIn(f File) {
	b := make([]byte, 512) // a zero-length slice would read nothing
	n, err := f.Read(b)
	_, _ = n, err // placeholder; real code would use these
}

He converted that to:

// I am guessing by Reader, he meant io.Reader
func ReadIn(r Reader) {
	b := make([]byte, 512)
	n, err := r.Read(b)
	_, _ = n, err
}

This looks obvious enough, but in practice it just isn't.

So of what use is this? It makes code much easier to read and understand, without having to filter through that large File interface. It also simplifies testing. For something like the ReadIn function, if we wanted to simulate the failure of Read(), we wouldn't have to autogenerate a full mock implementation of File, which could easily be >= 100 LOC (or 30 LOC if handwritten), when all we need is the Read method. All that would be needed is:

type mockReader struct{}

func (m mockReader) Read(buf []byte) (n int, err error) {
	// A byte count of 0 plus an error; io.Reader implementations
	// must never return a negative count.
	return 0, errors.New("whoops")
}

func TestReadIn(t *testing.T) {
	r := mockReader{}
	ReadIn(r)
	// In real code, ReadIn might return an error which assertions can run against
}

This might be a little hard to apply in certain situations, but with this technique, I think I can finally get rid of my issues with mocks in Go.

:satisfied:

Footnotes

[0] I understand code coverage is no substitute for software quality, but I feel it can be a useful signal.