Is your feature request related to a problem? Please describe.
Automatically generating models is great for many of our simple cases. However, there are a few cases where we'd like access to the original HTTP response - particularly the headers - without having to fall back to another library like requests.
One example is paginating results. We want to retrieve a list of models, but there's also an HTTP header like X-Result-Count that tells us how many total results are available.
Describe the solution you'd like
It would be great to have a Response wrapper object that would minimally preserve the HTTP headers and still return the deserialized models.
For compatibility reasons this could be an option like --wrap-responses, which is False by default; when True, it would update all the method signatures to use something like Response[Model] or Response[List[Model]], etc. Coming from the Java world, I've seen and used something like this in the past: https://square.github.io/retrofit/2.x/retrofit/retrofit2/Response.html
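A minimal sketch of what such a wrapper could look like - this is illustrative only, not the actual generated API; the Response name and X-Result-Count header come from this request, everything else is assumed:

```python
from dataclasses import dataclass
from typing import Dict, Generic, List, TypeVar

T = TypeVar("T")

@dataclass
class Response(Generic[T]):
    """Wraps the deserialized model(s) along with the raw HTTP metadata."""
    status_code: int
    headers: Dict[str, str]
    parsed: T

# Pagination example: the parsed page of models plus the total
# available count read from the X-Result-Count header.
resp: Response[List[str]] = Response(
    status_code=200,
    headers={"X-Result-Count": "42"},
    parsed=["item-1", "item-2"],
)
total = int(resp.headers["X-Result-Count"])
```

With Response[List[Model]] as the return type, callers keep the deserialized models while still being able to read headers for pagination.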
Describe alternatives you've considered
The current alternatives are either to lose/ignore the functionality afforded by the headers, or to manually make calls via another library (like requests) and parse the response ourselves.
Would you usually want one or the other in the entire client? I feel like it would be more convenient to provide both options in every client using either:
Different functions (my_endpoint vs my_endpoint_detailed or something?).
An option to the function (my_endpoint(wrap_response=True)) which could work with typing using the @overload decorator.
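To illustrate the second option, here is a rough sketch of how @overload could type a wrap_response flag - the my_endpoint name comes from this thread, while the Model stand-in and return shapes are assumptions for the example:

```python
from typing import Dict, List, Literal, Tuple, overload

Model = str  # stand-in for a generated model class

# The overloads let a type checker narrow the return type based on
# the literal value of wrap_response at the call site.
@overload
def my_endpoint(wrap_response: Literal[False] = ...) -> List[Model]: ...
@overload
def my_endpoint(wrap_response: Literal[True]) -> Tuple[Dict[str, str], List[Model]]: ...

def my_endpoint(wrap_response: bool = False):
    headers = {"X-Result-Count": "2"}   # pretend HTTP response headers
    models = ["widget-1", "widget-2"]   # pretend deserialized models
    return (headers, models) if wrap_response else models
```

A call like my_endpoint() would type-check as List[Model], while my_endpoint(wrap_response=True) would type-check as the wrapped tuple.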
My personal take is that an all-or-nothing approach would be sufficient. Rationale being, if there are cases where Response is needed, I'd prefer the signatures to be consistent instead of sometimes needing to call response.body() and other times getting the model directly.
The first approach (my_endpoint vs my_endpoint_detailed) is nice for providing a breadth of options, but I worry about growing the size of the generated code: if we do it for both the synchronous and async clients, presumably we'd then have four representations of a single endpoint.
I don't have experience with @overload, but I think at least for our case we'd probably always be passing wrap_response=True for the consistency.
The new _detailed function variants in the 0.6.0 alphas allow access to headers, status_code, and the raw content (in bytes). I believe this covers everything requested here so I'm going to close this one to mark progress. Should be easy enough to add more attributes to Response in the future if need be.
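As a rough sketch of the _detailed pattern described above - the attribute names (headers, status_code, content) are the ones mentioned in this comment, but the endpoint names and model are hypothetical, not the actual generated code:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Response:
    status_code: int
    content: bytes           # raw response body
    headers: Dict[str, str]
    parsed: Optional[str]    # stand-in for a generated model

def get_widget_detailed() -> Response:
    # A real generated client would perform the HTTP call here;
    # hard-coded values stand in for a server response.
    return Response(
        status_code=200,
        content=b'"gear"',
        headers={"X-Result-Count": "1"},
        parsed="gear",
    )

def get_widget() -> Optional[str]:
    # The plain variant just unwraps the parsed model.
    return get_widget_detailed().parsed
```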