Closed
Description
Directly exposing a web server written with Go's http package is extremely risky at the moment, as it is easily subject to denial of service attacks. The most obvious attack vectors for a web server are:

* Request/header lines that are too long
* Headers that are too large overall
* Request bodies that are too large

Right now, only the first of these seems to be handled by the http package. As far as I can tell, the textproto.Reader.readLineSlice calls used by http.ReadRequest implicitly return an error for lines longer than the underlying bufio.Reader's buffer (4096 bytes?). While this seems perfectly adequate for my own needs, it would be nice if it were documented somewhere.

There seems to be no protection against a large header with a bazillion lines, though. A configurable field, e.g. MaxHeaderLines, on the http.Server struct would be very handy here. It could be passed along to the textproto.Reader.ReadMIMEHeader call made by http.ReadRequest, which in turn could return an error once the number of header lines exceeds the given limit.

As for the request body, the maximum size needs to be configurable on a per-request basis. Right now, there is an implicit 10MB limit for application/x-www-form-urlencoded forms. It would be nice if this were documented somewhere (sorry if I missed it).

But even the 10MB limit doesn't completely help, because the request.Body set in http.readTransfer is only limited to the Content-Length provided by the request, and in the body.Close call any remaining content in the request body is copied to ioutil.Discard. So it seems a malicious party could send an extremely large request body and use up CPU cycles on the server.. ?

The maxMemory parameter to http.Request.ParseMultipartForm doesn't seem to offer much real protection either: at best, an attacker could just use up all available disk space by sending large request bodies.
Worse still, it seems memory could also be exhausted, because the multipart.Part.populateHeaders call makes use of textproto.Reader.ReadMIMEHeader, which accepts headers of arbitrary size...

It would be nice if a LimitRequestBody function field could be added to the http.Server struct. An ideal signature for it would be:

    func(req *http.Request, contentLength int64) (limit int64, error []byte)

Then, inside http.readTransfer, LimitRequestBody could be called with the current request and Content-Length value. If the function returned a non-nil error []byte, an HTTP "400 Bad Request" would be sent with the given error as the body. Otherwise, the returned limit would be used to limit the new &body{} object. And if LimitRequestBody wasn't set, perhaps some arbitrary default limit, e.g. 2GB, could apply.

Having such a configurable function would both provide protection against denial of service attacks and allow limits to be set on a per-request basis, e.g. perhaps you want to allow authenticated users to upload 1GB files, but deny all anonymous users from sending any request bodies at all.

In summary, extending http.Server with:

    MaxHeaderLines   int
    LimitRequestBody func(*http.Request, int64) (int64, []byte)

and making the respective changes to http.ReadRequest, http.readTransfer, and textproto.Reader.ReadMIMEHeader should provide significantly better protection against denial of service attacks.

And, finally, perhaps the order of these two blocks in http.Request.ParseMultipartForm should be reversed?

    if r.MultipartForm != nil {
        return nil
    }
    if r.MultipartForm == multipartByReader {
        return os.NewError("http: multipart handled by MultipartReader")
    }

As written, the multipartByReader check can never fire: if r.MultipartForm has been set to the multipartByReader sentinel, the first check returns nil before the error is ever reached.

I'm using 2d7eda309c95 tip.

-- Thanks for hearing me out, tav