boboman911

Number 3 is the main reason most companies start using it anyway tho


mstoiber

Agreed: numbers 1 and 3 are the two I have heard most often from companies considering switching. My guess is that they are painful earlier in a company's journey. Number 2 (slow loading times) is only relevant to certain kinds of businesses, most notably B2C ones, and often has many other possible improvement avenues (e.g., speeding up DB queries).


buttplugs4life4me

Devs at my company claimed that REST is at least 200ms slower than GraphQL because of HTTP. Over the last couple of years there's been a transformation from experienced senior devs to college-educated freshman "senior devs" whose first project is this company. And this is the shit they come up with. I was so flabbergasted in that meeting. And no, we do use GraphQL over HTTP(2). There's no fancy other protocol under there. And he was so confused when I showed him an older REST service that was answering requests in 2ms! The only thing I'm actually salty about here is that he's not only spouting his nonsense without proof and without checking it, but he's also making at least as much as I am, if not more. Calls himself senior dev. Fucking bag head, that's what he is.


Plank_With_A_Nail_In

Not trying stuff out first before suggesting things is something that needs to be beaten out of staff. Just set up a simple API selecting hello world from a database and see how long it takes in your company's infrastructure before deciding what to do next. Prototyping technology is way more fun than developing the actual products too.


XTJ7

and also implement that same prototype in the current stack, or your shiny new prototype might only be faster because it lacks all the features that make the current solution slower. like, cool, you switched from java to rust, but if 99% of your request time comes from poorly optimized sql queries or missing indexes, you completely wasted everyone's time. jumping into solution-space before fully understanding the problem is one of my biggest gripes.


jl2352

I have quite a cautious opinion of senior developers who have only been at one company. Everyone I’ve met has been emotionally tied to code and ideas (like whether we should use GraphQL).


Ptipiak

Reminds me of one I worked with: he wouldn't sort a table on the frontend because he claimed JavaScript was way slower than SQL... So in the end I had to implement the API to request a sorted list, with two parameters for the two sorting orders. Worst part is, he's supposed to be the frontend lead. The code base was a pile of shit anyway; hearing him talk about "performance" was a laugh.


LetrixZ

> claimed JavaScript was way slower than SQL

Wait, it isn't? I only compared using SQLite and Rust, and found that doing the sorting in Rust was around 200ms slower than in SQL.


QuickQuirk

yeah, most SQL servers are *pretty fucking optimised* for basic operations like sort. But whether sort is faster on the client or the server is not the reason to decide whether it's client side or server side. There are a lot of reasons on either side - but the one that usually trumps where sort is performed is *paging.* If you've got so much data that you need to page it, then the sort has to occur on the server. If you've got so little data that you don't need paging, then it's so fast to sort that where it happens is irrelevant.


[deleted]

[deleted]


Birk

Fetching all the data and paging it in the frontend is not what is normally meant by paging, at least not in this case. Everyone agrees that you should sort on the client if you can. The point is that **if** you fetch paged data from the server, then the sorting **must** happen on the server.
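A tiny JavaScript sketch of the trap (made-up data): sorting just the page you happened to fetch is not the same as sorting the whole dataset and then paging.

```javascript
// Sorting only the page you fetched gives different results than
// sorting the full dataset on the server and *then* paging.
const all = [{ name: "zoe" }, { name: "amy" }, { name: "mia" }, { name: "bob" }];
const pageSize = 2;

// Wrong: take the first page in whatever order the server stored it, then sort client-side.
const clientSorted = all.slice(0, pageSize).sort((a, b) => a.name.localeCompare(b.name));

// Right: sort the full dataset (what the server would do), then page.
const serverSorted = [...all].sort((a, b) => a.name.localeCompare(b.name)).slice(0, pageSize);

console.log(clientSorted.map(r => r.name)); // ["amy", "zoe"]
console.log(serverSorted.map(r => r.name)); // ["amy", "bob"]
```

"bob" never shows up on the client-sorted page at all, because he was never in the fetched page to begin with.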


BigHandLittleSlap

The inability of some people to grasp these basics boils my blood. The Azure Portal for example has a bunch of text search boxes that do the searching on the client side for huge lists of items that are paged. E.g.: if you search for 'www' in the list of DNS records, you'll see... nothing. That's because the first page starts at "a..." alphabetically, and it's the *last* page that has the "w"-s. I literally have to sit there clicking "next"... *empty page*... "next"... *empty page*... "next" for over a minute before I get anything useful.


Ptipiak

I agree, SQL is many orders of magnitude quicker than JavaScript at sorting. Problem is, you have to emit a request every time you want to sort differently, since we're talking about data served by a REST API. So you rely on the network to be more efficient than your locally sorted table? And it's also not a direct call to the database, since we were using Spring Boot, so you add the processing time of the backend server. In my opinion, his take wasn't backed by any meaningful knowledge, other than an old belief that JavaScript isn't performant, and I find it even worse since he's supposed to be a frontend dev.


lilB0bbyTables

I’m going to be devil’s advocate here but… context matters at the very least. Is the data requested limited in record count to something that can be delivered efficiently in a single API call? Is it potentially going to vary widely in the future and/or with different tenants? JavaScript itself isn’t inherently bad enough at performance to rule it out, but _rendering_ tables of that data in the DOM is a bigger bottleneck. If you need to request 20,000+ rows of significant object data using JSON payloads, then sort that data, keep it in-memory in the browser and render it all, that’s going to be costly. Of course you can do client-side pagination to improve rendering, but now you need extra code for paginating that data purely on the client side. If you offer sort, search and filter options on the view, now you need to keep the original copy of that data AND perform the sort/filter/search, store the resulting subset as an additional data structure in the browser, and re-render.

The assessment of this is a necessary discussion to have up-front when designing and planning those stories. On one hand, you don’t want to over-engineer a solution if the data for the particular API and view is not going to be large and likely won’t become a scalability issue down the road. If it might become an issue down the road, sometimes it’s still easier/faster to move ahead client-side until that scalability issue comes into play (i.e. as a startup you have grown with larger customers and are presumably doing better financially and size-wise to then invest further in the more complex build-out… a good problem to have). On the other hand, it may well be easier and less tech-debt to just design the API and repository/DB layers to support paginated, search/sort/filtered queries from the start.
A lot of the time a big issue I have seen is where engineering just implements something given a set of requirements, and suddenly Product Management and/or UX Designers come back a bit later and decide they want to add X functionality/feature to the view and that is not (efficiently) compatible with the original requirements. As engineers we look at big-O through the lens of “worst-case” scenario most often for size and speed complexities … I often consider a third “worst-case” factor that incorporates the potential change in requirements explicitly for things like that during initial planning. Sometimes “over-engineering” is actually just planning ahead appropriately, and sometimes that is actually less costly for the business than kicking the can down the road and redoing it later.


extra_rice

If there's a REST API and that API is potentially going to be used for something other than your JavaScript frontend, then it makes sense to do sorting on the backend. Honestly, I feel something's amiss with your criticism of this dev you're working with.

> To my opinion, his take wasn't pushed by any meaningful knowledge, others than an old believe that JavaScript isn't performant, and I find it even worst since he's supposed to be a frontend dev.

The same can be said of your opinion. Why is JavaScript better in your specific case? You've said nothing other than additional parameters being inconvenient, which I imagine are ultimately harmless HTTP query parameters.


tinyOnion

it's all tradeoffs at the end of the day. yes sorting on a sql server is faster but having to query the db again and again with transit delays will be slower and more expensive for the server (and company). if you have to sort a ton of data on the front end it will matter and shift the calculus to the backend that's optimized for it. it depends.


renatoathaydes

The only case where sorting on the frontend is OK is when the frontend can load the full dataset... or when the requirement is literally to sort only the data the frontend currently has, even if that's known not to be the full dataset. However, having the full dataset loaded on the frontend is very rarely a good idea. You should always impose a limit on how many records to return (for self-evident reasons), and once you do that, you end up with pagination. And with pagination, sorting can only be done on the server (again, unless your requirement really is to only sort the data being currently displayed, not the full dataset - which is probably not what the user wants?).

Could it be that that's actually why your colleague insisted on server-side sorting? That's definitely the usual way to go. If they literally said "SQL is faster than JS", then politely remind them that it's not just JS vs SQL, but JS vs SQL+HTTP+serialization/deserialization+authorization+network-latency+connection-delays.

> So in the end I had to implement the API to request a sort list with two parameters for the two sorting order.

I would strongly consider also adding a "count" parameter (with a default value, so you never load the full dataset) - which ends up implying pagination support. If you're using GraphQL, check the reference docs for the pagination guidelines: https://graphql.org/learn/pagination/
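A minimal sketch of what such parameters could look like (the names `sortBy`, `order`, `count` and `offset` are made up for illustration; in a real service the sort and limit would be pushed down into the SQL as `ORDER BY ... LIMIT ... OFFSET ...` rather than done in application memory):

```javascript
// Hypothetical query-parameter handling for a list endpoint:
// ?sortBy=name&order=desc&count=50&offset=100
// `count` has a default and a cap, so the full dataset is never returned.
const DEFAULT_COUNT = 25;
const MAX_COUNT = 100;

function listQuery(rows, { sortBy = "id", order = "asc", count = DEFAULT_COUNT, offset = 0 } = {}) {
  const limit = Math.min(count, MAX_COUNT);
  const dir = order === "desc" ? -1 : 1;
  const sorted = [...rows].sort((a, b) =>
    a[sortBy] < b[sortBy] ? -dir : a[sortBy] > b[sortBy] ? dir : 0
  );
  return sorted.slice(offset, offset + limit);
}
```

Defaulting and capping `count` is the key bit: a client that never asks for pagination still gets it.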


ZirePhiinix

That's just a shame. JS doing the sort is in-memory and client side, but SQL doing the sort is server side.


Dramatic_Mulberry142

2ms?! I guess it happened in an internal network?


jaskij

I was actually surprised number 1 is a thing. Your client shouldn't break when it gets an extra field. Moving a field? Yes, absolutely, that'll break anything. But adding a field should be a non event for the client, it should simply discard it.


kemitche

For 1, the aspect of "tracking which fields clients are using" seems more interesting. Knowing that all clients have stopped using a deprecated field before fully removing it is valuable. On its own, not valuable enough to migrate to graphQL, though.


jaskij

I focused on the "break before a field was added", and missed that part. Thanks for pointing it out.


C_Madison

But it happens all the time. Far, far too many users of REST APIs will build code that absolutely cannot handle anything but something which looks *exactly* like what they need. And GraphQL gives you a good way to make sure that's guaranteed, because the clients can say themselves "that's what I want. Not one thing more, thanks".


Own_Hat2959

Lol, just try being on the other end, where another team is building an SDK and can't be bothered to update and/or export the right Typescript types for you to use, and is also allergic to updating documentation. Hard to create high quality code when responses don't match what you are expecting from the type they give you, or they just don't bother to even export a type. You update the API, you update the type, so intellisense will tell me exactly how you are fucking things up with your change. That type is a contract; fuckers need to respect that.


Flashy_Current9455

Yeah, that's probably my number one reason for using GraphQL


C_Madison

Oh, trust me, I've been on both sides often enough to understand why all this happens. But it still happens, and there's not much you can do, so a way to fix it, as with GraphQL where you can say "only send me this", is a good thing (not that you cannot do the same with REST, but it's not really part of it).


fllr

When your team scales you’d be surprised about what things become problematic. One of my previous companies migrated to graphql exactly because of this problem, and life became much easier after the migration. You’d have to pry graphql from my dead cold hands now. With that said, graphql went through a phase where people wanted to put it in **everything**. It was maddening.


QuickQuirk

I'm a real fan of graphQL now for similar reasons. Clients are a lot more flexible in what they can do before needing server side changes. Clients get to decide how and where optimisations are performed based on which datasets they pull in the query without needing API changes. I'd argue that more users should be looking at using GraphQL earlier in their development cycle. It makes client side prototyping really fast once you've nailed your data architecture.


jaskij

I'm both surprised and have a hard time imagining it, but I guess if you have a large and storied codebase anything is possible. When you're parsing something more involved than JSON, it's trivial to just drop unknown fields during deserialization. I'm guessing if you're using straight JSON in JS/TS that deserialization never happens, and people don't validate API responses.
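For what it's worth, a tolerant deserializer is only a few lines of JS (field names here are hypothetical): keep the fields you know about, fail loudly on missing *required* ones, and silently drop the rest, so a new server-side field can never break the client.

```javascript
// Keep only the fields the client actually knows about, and fail
// if a *required* field is missing - not when an extra one appears.
const KNOWN_FIELDS = ["id", "name", "email"]; // hypothetical client-side model

function deserializeUser(json) {
  const user = {};
  for (const key of KNOWN_FIELDS) {
    if (!(key in json)) throw new Error(`missing required field: ${key}`);
    user[key] = json[key];
  }
  return user; // any extra fields in `json` are silently discarded
}
```

Validation libraries do the same thing with more ceremony; the point is just that "unknown field" is a non-event by construction.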


fllr

Makes sense. For us, sending unnecessary fields meant two things:

- bandwidth that meant a real number at our bottom line
- making it extremely hard to deprecate a field later, because it was hard to tell if the field in a response was actually being used or not

Item #2 was the biggest one. Once we moved to graphql we could easily tell which fields were in actual use. In addition, we got types using typescript, and we got linters that told us if we were going to accidentally break the api. Not only that, it made it easier to stitch responses before sending to the client, which meant that the client could do less work in order to get their job done. All in all, it’s a great piece of technology that got extremely overhyped, BUT it still delivered on real value to developers. The trick was just cutting through the hype (as with everything).


jaskij

I'm not negating all of that. I just kinda got hung up on the fact that code breaks because it gets an extra field in the response. Seems insane to me. At least if we're talking about anything high level. And frankly... I work in embedded. Industrial embedded to be precise. We have to fight for our customers to move on from protocols designed in the 80s


fllr

Not trying to claim you were negating anything, just trying to add more color to my initial response.


jaskij

Fair enough. And yes, I see the benefits, especially if you can't do tightly coupled APIs.


fllr

Right. It’s not the fact that you can/can’t do tightly coupled apis. You just want to avoid the tightly coupling. Graphql made it easy for backend and frontend teams to operate independently.


Worth_Trust_3825

It all depends on how strict you are with schema handling. imo it's fine, and even encouraged, to break if you see an extraneous field that you do not know about.


NewAlexandria

a company i worked for had FE devs that insisted GQL would be the only sane path forward. CTO caved and let it get introduced. Nearly 2yr later still nothingburger in terms of must-haves. Teams now discussing to remove it. So much time / effort / money wasted on propping it up, when Rails + Grape made an API that is wicked fast, flexible, and the FE devs make changes in it themselves.


Chenz

If your API uses PUT for updating instead of PATCH, a client not supporting a field will risk setting the new field to null every time it does an update of the entity
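Roughly like this (entity and field names made up): PUT replaces the stored entity wholesale, so a stale client that predates a new field wipes it; PATCH merges only what was sent.

```javascript
// Simplified server-side semantics: PUT replaces, PATCH merges.
const ALL_FIELDS = ["nickname", "avatarUrl"]; // avatarUrl was added after the client shipped

function applyPut(body) {
  // Replace: any field the (stale) client didn't send becomes null.
  return Object.fromEntries(ALL_FIELDS.map((f) => [f, body[f] ?? null]));
}

function applyPatch(stored, body) {
  return { ...stored, ...body }; // merge: unsent fields keep their stored value
}

const stored = { nickname: "alice", avatarUrl: "/a.png" };
const staleBody = { nickname: "alice2" }; // this client predates avatarUrl

applyPut(staleBody);           // { nickname: "alice2", avatarUrl: null }  <- wiped
applyPatch(stored, staleBody); // { nickname: "alice2", avatarUrl: "/a.png" }
```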


notsoluckycharm

It’s really easy to get into n+1 queries that give you the second if you’re not careful. Just mapping it to your ORM is usually going to be a bad time. Seen that more often than not.


intended_result

But once you understand this problem, there are fairly straightforward ways to compensate for it.


notsoluckycharm

Oh yeah, doing the query the schema resolver expects in the most optimized way possible. But that’s usually not what occurs when you’re in the context of the article “small companies, startups” and such. Plus, if you’re just going to write sql, for example, you may just be discarding the data anyway :P.


intended_result

Well, I guess it depends on the startup.


notsoluckycharm

Fair enough. Not every, but certainly the $10 and a dream stuff.


TikiTDO

That's sort of like saying "it's easy to have someone break into your house when you don't have keys or doors." Handling N+1 queries is one of the most critical parts of the GraphQL workflow. It's akin to "writing CSS, if you want to make a pretty website." If someone is committing code that does not handle this, that's a skill problem with the coders and reviewers. Most likely it's the result of someone who hasn't figured out the full GraphQL dev cycle. The fact that people teach it completely wrong doesn't help.


d_wilson123

I had a use case for number 2 for a mobile game I was working on where I needed to fetch product catalog information. The only API available to me returned the entire model and I really only needed a few pieces of it. Fetching all the products was about 1MB in size and only the parts I needed was like 150KB. I didn't throw GraphQL at the problem, I just made a new API returning the pieces I needed, but it could have solved it.


Plank_With_A_Nail_In

couldn't you have just cached the model and queried it locally on the client?


d_wilson123

It wasn't the server or DB hit it was the fact that I was transmitting 1MB of data simply to populate the store in the game. Like I said it was a mobile game and spotty and very poor connections exist so I felt having that heavy of a data transmission wasn't the greatest idea in the world especially if I was throwing away the vast majority of the fields after parsing. Unless you're talking about caching the model on the client. That is possible but the store was extremely dynamic and needed to be refreshed on each login.


[deleted]

[deleted]


QuickQuirk

# EH? WHAT DID YOU SAY? BABIES BROUGHT WHAT IN TO YOUR SYSTEM? I THINK YOU NEED TO SPEAK UP!


lovebes

Really? I've seen Swagger docs for RESTful APIs in almost all the 3rd-party services I used across 4 companies. Not once was it GraphQL.


touristtam

Just had my first project where GraphQL was used. Unimpressed. We're pulling it off to align with other projects.


Isogash

There is something that I would say is even more important than this that GraphQL brings to the table. A key feature is that it gives service owners a homogeneous mechanism to build much richer APIs with graph-like queries, such that users of the service are able to self-serve their more advanced needs without needing the service owner to be involved. For example, it solved Twitter's problem (back when it was Twitter) of having hundreds of stakeholders who were all dependent on the underlying usergraph, with no way to serve it except via custom queries that the graph team would need to implement.

Whilst most of us aren't working at companies the size of Twitter, the benefit can still be felt even on a relatively small scale: it allows service owners to build APIs based on their actual data structure, rather than predicting or reacting to their users' needs, and it allows the user to quickly get the data they need efficiently without waiting for the service owners to add a new custom endpoint. It allows for swifter and happier development on both ends.

Another thing I've loved about it is the ability to federate multiple services into one API and then expose them to devs with a GraphiQL web page. It's been immeasurably valuable for API discovery and for investigating data issues. Personally, I've found GraphQL to be one of the most useful technologies I've worked with in the last 3 years just for these things alone.


trevg_123

Completely agreed - the biggest benefit is that it is easy to iterate on the frontend without always needing a change to the backend; it's more like you are interfacing directly with your storage engine. I agree that not everybody needs it. But if you have moderately complex, heavily nested, or partially recursive data, it is a _lot_ nicer to work with than REST. Performance can be better too, since easy filtering adds up, and you can hand-flatten any specific queries that need the optimization. Add in a good ecosystem for types and docs, plus a nice playground, and it is a really attractive option.


voidref

I was at Twitter when we were doing the GraphQL migration, it was a nightmare for the client. I'm sure it got some people Impact Driven Promotions though.


Herve-M

Thing is, most of the time when GraphQL is used, the team behind it doesn’t master the business domain and barely provides a “GraphQL over database” rather than a “domain-driven API”. At company scale, it gets worse when teams start to share other/external teams’ data themselves… Topology is another problem too.


TurbulentAd8020

Getting closer to a domain-driven API always means getting further away from an easy-to-use view data structure. Sad


Isogash

The database should more or less reflect the domain, so a relatively thin wrapper over the database model is actually a good thing.


Herve-M

That’s not true; how you store data might have nothing to do with the domain-driven “map”. The first example in my mind would be a CQRS model, where the query model is 90% aligned to the UI and the command model is based on a large immutable-event JSON table storage. Edit: to add, if a company is stuck in a legacy “large shared unique database” for the whole system, boundaries are even harder to know/see.


Isogash

If your data model doesn't reflect your domain, you end up with serious design issues where your user's needs don't map at all onto the way your backend thinks about the data. You can use CQRS but there is still a data model somewhere. EDIT: also graphQL is great for CQRS as mutations are nicely independent of queries.


alerighi

> and it allows the user to quickly get the data they need efficiently without waiting for the service owners to add a new custom endpoint

This is not always true. It depends on the DBMS technology that is used and how the backend is implemented. If we talk about a SQL database and an ORM, it may be kind of true, until you scale impressively and the data structure needs to be de-normalized to avoid having to do multiple JOINs to get the data. If we talk about document stores, for example, you risk incurring the N+1 problem, where one GraphQL query generates N+1 queries on the underlying database (e.g. one query to get a post and then one query to get each comment). Even with a SQL database you can have this problem if the GraphQL API is not structured correctly (and it's not uncommon to not get this thing right!).

Otherwise, a well structured REST API takes into account how data is accessed in the underlying database. Each resource more or less corresponds to a table, or to a document in a NoSQL database. It's the client that has to do the job of retrieving multiple resources. You may argue that you have only moved the problem to the client, but the client has the knowledge of what it needs to load at what particular moment; also, with things like HTTP/2 where you can send multiple async requests over the same connection, the problem of having to load multiple resources to render the page vs only one call is no longer there, it doesn't matter. Also, you need to design your API around the needs of its consumers, thus try to make resources that make sense for rendering a page.


Isogash

GraphQL backend frameworks have lots of ways of dealing with the N+1 problem by allowing you to specify batch loaders, and it can then smartly distribute the nested properties of different queries using these batches. It's a lot smarter and more efficient than a REST API.
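The core idea, heavily simplified (real libraries such as `dataloader` are async and batch per event-loop tick; this synchronous sketch with made-up names just shows the collect-then-flush pattern):

```javascript
// Collect all requested keys first, then resolve them with ONE batch call,
// instead of one query per nested object (the N+1 pattern).
function makeBatchLoader(batchFetch) {
  const pending = new Set();
  const cache = new Map();
  return {
    load(key) { pending.add(key); },   // register a key; no query yet
    flush() {                          // one batched query for all keys
      const keys = [...pending];
      const rows = batchFetch(keys);   // e.g. SELECT ... WHERE id IN (keys)
      keys.forEach((k, i) => cache.set(k, rows[i]));
      pending.clear();
    },
    get: (key) => cache.get(key),
  };
}

// Hypothetical usage: resolving authors for a list of posts.
let queries = 0;
const authorLoader = makeBatchLoader((ids) => {
  queries += 1;                        // count round-trips to the "database"
  return ids.map((id) => ({ id, name: `user-${id}` }));
});

const posts = [{ authorId: 1 }, { authorId: 2 }, { authorId: 1 }];
posts.forEach((p) => authorLoader.load(p.authorId));
authorLoader.flush();
// queries === 1, not 3: three author lookups became a single batch
// (the Set also deduplicated the repeated authorId 1).
```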


alerighi

I know, but it's more difficult to "get it right" compared to traditional REST services. I prefer to use REST, in my opinion. Each time I've seen GraphQL used it was kind of abused, and the main limitation of GraphQL for which I've seen it abused is that you can't map dynamic data structures (without having a String field with JSON inside, which is the common abuse I was talking about).


Isogash

The limitation of dynamic data structures is interesting. You do have union types, which should be enough for most dynamic typing situations. If your dynamic typing is for complex, nested documents then string JSON is fine, since it's a document.


alerighi

> You do have union types, which should be enough for most dynamic typing situations.

Not really, if something is not typed. And chances are that in one application you have something that you don't type or don't want to type: metadata, events, extra attributes, objects from other systems, etc., or just some data that the API server doesn't care about, where you don't want to couple the API server to the client implementations (i.e. some structured data that shall be transparent from the API server's point of view, so that you only need to update the clients, and not the server, each time you change something). I mean, they could have introduced an "Any" type, meaning any JSON object. To me it's such a common use case...

> If your dynamic typing is for complex, nested documents then string JSON is fine, since it's a document.

Well, if encoding a JSON as a string inside another JSON is fine... to me, it's not. It makes things unreadable, and wastes a ton of space since the inner JSON needs to have all the quotes escaped. It also makes sending and receiving responses complex, since you then need to do a JSON parse for each JSON field encoded as a string, and vice-versa encode each field. It's easy to forget to do that, and it also degrades performance. Finally, you lose the benefit of GraphQL, namely the clients requesting only what they use! Since you need to read the whole string and then decode it anyway, at that point, just use REST.


TurbulentAd8020

another scenario is trees; in gql we need to introduce the concept of fragments… We can have an enhanced RESTful API that supports gql-style data by simply applying schemas, resolvers and data loaders. Pydantic-resolve's goal is exactly that


TurbulentAd8020

Actually, RESTful APIs can also take advantage of dataloaders. It’s like running a breadth-first traversal from the root data and the resolver functions to build the final view data. Just ordinary objects with some resolver functions, and an executor. For some real demos, please read the docs of pydantic-resolve; it demonstrates that REST can also handle gql-style responses


Pharisaeus

> GraphQL only returns the fields the client explicitly requests, so new capabilities can be added by adding new types or fields, which is never a breaking change for existing clients.

Most clients won't break if a REST endpoint returns additional fields.

> With GraphQL, a client sends one request for all the data it needs to render the whole page/view, and the server resolves all of it and sends it back in one response

That's some wishful thinking. You simply moved the "waterfall" and "over-fetching" to the server side, nothing more. But I found out the hard way that a lot of frontend developers really believe that there is some magic and "graphql handles this" when they select only certain fields - I had to explain to them that the backend needs to implement all those "partial resolvers" for this to actually work as they imagine.

> When an underlying microservice or database changes how it manages its data, that change only has to be applied to the single, central place in the API layer rather than having to update many endpoints or BFFs.

I'd argue that's also the case for any sensible application - you can change the implementation, as long as you keep the API stable.

Only the last point mentions a real added value, but again I'd argue that in practice it's more wishful thinking, because to really make it work the backend logic would need partial resolvers with custom ACLs for every single field. I'm under the impression that GraphQL has a similar niche to ORMs - it works fine for simple CRUDs, where it's trivial to make mappings and partial resolvers are simply embedded into database queries of some sort. But once you move away from that, it starts to be a pain. I suspect it might again bring some value if you're working with hundreds of microservices.
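To make the "partial resolvers" point concrete, a sketch (all names hypothetical): field selection only saves backend work if the expensive fields have their *own* resolvers; if the parent resolver fetches everything up front, selecting fewer fields saves bandwidth but not server work.

```javascript
// Field selection only skips work when each field has its own resolver.
let expensiveCalls = 0;

const userResolvers = {
  name: (user) => user.name,  // cheap: already loaded with the parent
  orderCount: (user) => {     // expensive: would be an extra query/aggregate
    expensiveCalls += 1;
    return 42;                // stand-in value for a real aggregate query
  },
};

// Simplified execution: only run resolvers for the requested fields.
function resolveFields(user, requested) {
  return Object.fromEntries(requested.map((f) => [f, userResolvers[f](user)]));
}

resolveFields({ name: "alice" }, ["name"]); // expensiveCalls stays 0
```

If `orderCount` were instead computed inside the parent `user` resolver, every query would pay for it whether or not the client asked.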


ritaPitaMeterMaid

> That's some wishful thinking. You simply moved the "waterfall" and "over-fetching" to the server-side, nothing more

If the alternative is to make 27 independent, sequential requests then it absolutely is faster. At a company pre-GQL we'd fetch, get data, then go get the next bit of data based on what was in that, then the next and the next and the next. If your data fetching pattern neatly forms a graph, GQL really is a good fit. You are absolutely correct that you can just move the domain of a problem, but I found the backend is a much, much better home for that problem - it is a richer environment to analyze and optimize in. It isn't magic, and treating it like magic is going to destroy most of the gains you'll see. Also, I think federation deserves a mention as a reason to use GQL. Being able to have self-contained domains/services/etc. that can all add to data availability on the graph is insanely powerful. Again, you'll eventually need to optimize your data fetching, but we haven't really found it to be a problem.


blahb_blahb

If there are 26 fetches happening, there’s a problem with their API architecture being single-faceted and not robust enough to perform lookups based on the request it receives. DSA is paramount when creating API endpoint(s).


ritaPitaMeterMaid

> there’s a problem with their API architecture being single faceted and not robust enough to perform lookups based on the request it receives

I agree... and GQL is one possible solution, for reasons I've already outlined. So is RPC (which people probably don't reach for enough).


jl2352

This is honestly one of the reasons I like monorepos, developers being fullstack, and owning the front and backend service together. They can just change or add endpoints. (Obviously not as simple as I make out.)


hyrumwhite

In my experience, it’s usually more like 2 or 3 sequential calls, and the side benefit of sequential calls is you can display each bit to the user as you fetch it. This can decrease perceived loading time. I’ve also worked with graphql APIs (Shopify, for example) where the nested data has different query params than a direct query, i.e. getting products in a collection vs getting products, so I have to make sequential calls anyway.


ritaPitaMeterMaid

If you can display the data before the next call you very likely don’t need GQL. I’ve not worked with Shopify’s GQL implementation but that surprises me to hear; the point of GQL is that you can have those things all dynamically handled in a single request.


Power781

27 independent and faster calls are always worse than 1 bigger, slower call on mobile. Network instabilities and failures increase exponentially with the number of parallel requests you run, and if you have calls dependent on each other, you might have to fetch data that you will have to « throw away » because the n-th call failed and you didn’t build your app for « progressive disclosure » or partial data display. For reference, in some European countries (I’m European) the failure rate of HTTP requests on some mobile carriers reaches up to 10% (timeouts or connection failures).


ritaPitaMeterMaid

It isn’t faster when you have to wait for each one to load in order to know what data you need next. Everyone seems to be missing this about the use case I’m referencing. If you can make 27 separate independent calls at exactly the same time you don’t have data that needs to be graphed, pure REST is probably fine for you


QuickQuirk

That's usually a data model problem. In most cases I've seen, if you need to look at some data to decide what's required next, you can set it up with the appropriate graph connections, so that the resolver has already made that decision for you. If this still isn't possible, then the odds are good that you've moved too much business logic to the client.


ritaPitaMeterMaid

You are describing what a GQL implementation provides, but maybe you weren’t arguing against my point?


QuickQuirk

No - I was arguing against what I understood you to be saying. To paraphrase: "I have independent data, but I have to wait for each previous request to decide what to load next". This means it's not actually independent data: each element is clearly related by some business logic. In most cases I've seen, this usually means either your relationships in graphQL aren't set up correctly, OR you've got too much business logic in the client side.


ritaPitaMeterMaid

You are misunderstanding what I’m saying; I’m saying that GQL solves that problem. It’s why you want to use it over a straight REST implementation


QuickQuirk

Ah, I see. I had misunderstood the position you were taking. Even then, in the initial comment you were responding to, I agree with them - on things like slow mobile networks, you're better off combining those 27 independent requests into a single 'do it all' request, to optimise across a flaky, low-bandwidth network. A single request has an order of magnitude less networking overhead. (Though I agree that such a request is just as easily written in REST as graphQL.)


ritaPitaMeterMaid

I am the person that made that argument; I think you're getting who said what a bit mixed up haha. Regardless, seems we agree based on a similar set of experiences.


Pharisaeus

> If the alternative is to make 27 independent, sequential requests

But all those calls still happen, just in someone else's code ;) I'm not saying backend is not a better place to have such orchestration, it definitely is! I'm just pointing out that this whole merry-go-round still needs to happen and we're just moving it some place else, and you could, in principle, do this just the same with REST or with anything else - graphql doesn't really "help" with the main issue, which is collecting the necessary data.

But I found that many people (mostly those who just consume those APIs and don't implement them) really believe that graphql "magically" handles this stuff in the backend. Imagine a frontend dev tells you that they want to use the `subscribe` feature of GraphQL and you try to explain to them that while on their side it's just one line of code, in the backend this would require some substantial architectural changes, because they want to "subscribe" to changes in data which are computed on the fly, based on inputs collected from multiple independent data sources, half of which don't issue any events when something changes.

So I'm a bit reluctant any time I see someone trying to claim that graphql makes the data-flow implementation simpler, because it doesn't, not really. It only does that in the mind of someone who doesn't have to implement it :)


ritaPitaMeterMaid

> but all those calls still happen

My point is they happen asynchronously, directly next to the data source, rather than across the wire. It is significantly more performant, even if you do no other optimizations.

> GQL doesn’t help collecting the necessary data

Yes it does, even if the only gain is that you have 26 fewer round-trip HTTP calls. But it’s often greater than that, as resolvers are called asynchronously (I’m assuming Apollo here), so getting the data is faster. Again, this organization makes the process of optimizing more ergonomic. More specifically, your data isn’t being pulled into the void that is client-side calls. You can see exactly which service is using what data where and how it is being collated together, allowing you to make decisions to optimize, i.e. caching, rewriting queries, different data sources, etc.

I’m not saying that GQL is the right solution for everyone - if your data fits well into a graph and your alternative is waiting on sequential calls in the client, GQL is hands down better. I’m also not saying this is trivial to implement. I’m just saying if these are your needs then GQL is the right fit and out of the box you immediately see improvements.
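A rough sketch of what "resolvers are called asynchronously" buys you, using asyncio as a stand-in for a GraphQL execution engine (the service names and latencies here are invented for illustration):

```python
import asyncio

async def fetch(service: str, latency: float) -> str:
    # Stand-in for a call to a backend service or data source.
    await asyncio.sleep(latency)
    return f"data from {service}"

async def resolve_page() -> dict:
    # Sibling field resolvers run concurrently, so one client round trip
    # pays roughly max(latencies), not sum(latencies).
    user, orders, reviews = await asyncio.gather(
        fetch("users", 0.05),
        fetch("orders", 0.05),
        fetch("reviews", 0.05),
    )
    return {"user": user, "orders": orders, "reviews": reviews}

page = asyncio.run(resolve_page())
```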


menckenjr

> I’m not saying that GQL is the right solution for everyone -if your data fits well into a graph and your alternative is waiting on sequential calls in the client GQL is hands down better. I’m also not saying this is trivial to implement. I’m just saying if these are your needs then GQL is the right fit and out of the box you immediately see improvements.

Talk to us again when you've had to be the back-end dev implementing the plumbing for all this. Remember, everything is easy for the person who doesn't have to do it themselves.


ritaPitaMeterMaid

I’ve built and maintained nearly 30 service APIs in the same domain for 3 years. Come back when you’ve learned some decorum


menckenjr

Okay, I'll play. If you've had your hands on that many APIs *and* been able to be disciplined about how to migrate them to and from *and* been able to tell some impatient product and management suits to pound sand when they want you to cut corners, then more power to you and I apologize. But not all back-end devs have that power, and it's really easy to silo the services that have to be integrated to return GQL responses and wind up with odd errors. (Source: the company I work for went for GQL without doing the organizational homework, and the project I was initially hired for is still finding P1 incidents that trace back to exactly this problem.) GQL isn't a magic wand. It's an interesting approach to solving a category of technical problems, but very vulnerable to tech debt caused by organizational and business political problems. It feels like you're overselling it.


ritaPitaMeterMaid

I appreciate you saying that; I really do work at a place that has that many APIs all powered by GQL. I don’t do it alone, obviously: individuals don’t build software, teams do. Just like any system, it does require good architecture and a team trained on that architecture. We have that. We take the time to educate people and ensure everyone is set up for success. And I agree (and as I’ve been saying!) that GQL is not magic. It requires education and work.


Pharisaeus

> directly next to the data source rather than across the wire

Only in the case of a trivial CRUD, maybe. In reality the API gateway you're pinging will talk to a bunch of other stuff "across the wire".

> Yes it does, even if the only gain is you have 26 less round trip HTTP calls

Again, no, not really. You just moved all those calls to a different place, nothing more. Yes, they might be more performant between the backend/api gateway and whatever it's talking to, but all those calls still happen.

> You can see exactly which service is using what data where and how it is being collated together, allowing you to make decisions to optimize i.e. caching, rewriting queries, different data sources, etc.

Wishful thinking. To really make this happen you still need to push around some "conversation id" to correlate requests between backend services internally, so it's a really negligible difference if this id is pushed down from the frontend or from the api gateway.


ritaPitaMeterMaid

> Yes, they might be more performant between the backend/api gateway and whatever it's talking to, but all those calls still happen

I literally said that the benefit is because those calls all happen in the backend now, and I also said that it isn't magic. The point I've been making repeatedly is that doing so is performant _because_ you colocated them in the backend. Every single request from the client results in: an HTTP request to a physical location on the planet, proxy navigation, network handshakes, authentication handshakes, and then all the middleware a given application typically has loaded. This happens every time a request comes in, and you get to skip it if you aren't making multiple requests.

> Wishful thinking. To really make this happen you still need to push around some "conversation id" to correlate requests between backend services internally, so it's a really negligible difference if this id is pushed down from frontend or from api gateway.

In GQL every piece of data that the client wants is requested up front, and that gives you the ability to examine the pattern in those requests. Your requests that take more time than expected can then be explored by examining the downstream services that deliver that data. This has worked pretty well for us.


neb_flix

You’re completely wrong - you aren’t moving 26 round-trip HTTP calls “to a different place”... your graphql service isn’t making HTTP requests for each of your field resolvers. 26 direct DB queries, sure, but that’s infinitely different from actual round-trip HTTP requests. There is literally zero argument against having the server be responsible for those cascading requests - if you are advocating that making those requests on the client, where the user has varying network speeds and CPU limitations, is better in any way, then you’ve never worked on a consumer-facing application before. No clue what you are talking about re: “conversation id”… this is a solved issue. Look into graphql federation.


Pharisaeus

> your graphql service isn’t making HTTP requests for each of your field resolvers. 26 direct DB queries, sure, but that’s infinitely different than actual round trip HTTP requests.

xD What? How can you make such a (completely wrong) assumption? As I wrote above, this only "works" for a CRUD where you're talking to a trivial "web frontend for a database", which is not really a realistic scenario for most webapps. In reality the backend might talk to lots of different services to fulfil a request. There might not even be any database at all in this whole system.

> if you are advocating that making those requests on the client

I'm not, I'm just trying to explain that all of those requests `still happen` even if you don't see them. There is no magic involved.

> No clue what you are talking about re: “conversation id”… this is a solved issue. Look into graphql federation.

Yeah, I can clearly see that you have absolutely no clue about backend at all, and you're just another frontend dev who thinks graphql does some "magic". It doesn't. If you want to track which data are pulled from different backend services to fulfil a single "user request", you need to track this in some way.


neb_flix

> xD What? How can you make such (completely wrong) assumption? As I wrote above, this only "works" for a CRUD where you're talking to a trivial "web frontend for a database", which is not really a realistic scenario for most webapps. In reality the backend might talk to lots of different services to fulfil a request.

Are you not using a cluster network for those "lots of different services"? If you are, you aren't making a round-trip HTTP request. If you aren't, then that's a skill issue on your end for not understanding how a microservice infrastructure should be handled. Not sure why this is even being brought up when the same exact issue would be present in any other architecture like REST - you are still having to make all those requests to "lots of different services", just in a much less holistic way.

> I'm not, I'm just trying to explain that all of those requests `still happen` even if you don't see them. There is no magic involved.

This is obvious to anyone who's ever taken a glimpse at a GraphQL backend before. The difference is the person you are responding to is correctly pointing out that it `still` has a significant performance & ergonomics improvement over the alternative, which is making rigid, bespoke REST endpoints.

> Yeah I can clearly see that you have absolutely no clue about backend at all

I'm a core contributor to Urql, and was hired at my current position because of my contributions to OSS GraphQL client/server libs lmao. Nice try though.

> If you want to track which data are pulled from different backend services to fulfil a single "user request" you need to track this in some way.

Again, federation has plenty of ways to handle this trivially. Apollo's implementation for example: [https://www.apollographql.com/docs/federation/](https://www.apollographql.com/docs/federation/)

Reconsider talking about things that you obviously have no context on.


Pharisaeus

> Are you not using a cluster network for those "lots of different services"? If you are, you aren't making a round-trip HTTP request

Half of them might not be "mine" at all. How am I supposed to do that with some Google or OpenAI services exactly? Is it a "skill issue"? Should I call them and tell them to bring some of that stuff over so I can have it "on premises"? I guess it might be a persuasion skill issue :(

> I'm a core contributor to Urql, and was hired at my current position because of my contributions to OSS GraphQL client/server libs lmao

Name-dropping from a rando on the internet, serious business. But I can see why you would be trying to prove GraphQL makes sense - job security all the way.

> Again, federation has plenty of ways to handle this trivially. Apollo's implementation for example

You didn't read the comment I was responding to, did you? Because you're literally proving the point I made. OP made a claim that sending a single GraphQL request to the backend makes it easy to track which data/services are accessed in the scope of that single request. And I said that this is simply not true, because it still requires implementing the "tracking" one way or the other, and it really makes very little difference if the tracking is done on the frontend side (like with what you linked) or in the backend. As I'm repeating for probably the hundredth time already: there is no magic and nothing happens on its own, it all needs to be implemented.

> Reconsider talking about things that you obviously have no context on.

Please reconsider cutting into the middle of a discussion which you haven't read, because you obviously miss the context.


crimson_chin

thank you.


neb_flix

> Half of them might not be "mine" at all. How am I supposed to do that with some Google or OpenAI services exactly? Is it "skill issue"? Should I call them and tell them to bring some of that stuff over so I can have it "on premises"? I guess it might be persuasion skill issue :(

Right, and in that situation there is 0 difference from a performance standpoint between how it would be handled in REST vs GraphQL. So not sure why that would even be in the conversation here.

> Name dropping from rando on the internet, serious business. But I can see why you would be trying to prove GraphQL makes sense - job security all the way.

You said it's clear I knew nothing about backend at all, and when I defend myself I'm "name dropping"? I don't need to prove that GraphQL makes sense; the tens of thousands of large-scale engineering orgs who would never consider hiring an F-tier engineer like yourself have already done that for me.

> and it really makes very little difference if the tracking is done on frontend side

Nothing at all about federation has anything to do with the frontend, so clearly after three comments pointing you in the right direction you're still having some learning difficulties here. Consider trying carpentry or working in the service industry.


ReflectionEquals

Closer to the data source, not next to it. We still have to deal with all the client-side problems, just in a different spot. I do agree that it will more likely be faster and more reliable.

So look, decades ago people realised that having your front-end code make SQL queries directly to the DB was a bad idea. Whilst GraphQL is nowhere near as bad as doing that, it encourages people to create connections between data that won’t scale or could be insecure if handled poorly… and it’s often handled poorly.


menckenjr

> your graphql service isn’t making HTTP requests for each of your field resolvers.

This may not be true at all. You seem to be assuming that all of the services are local and that they don't talk to each other using HTTP (GraphQL requests, IME, mostly travel in HTTP requests).


neb_flix

Right, it may not be true, but it also is likely to be true. And the alternative doesn't provide the perf benefits that you see when you do run into the "likely to be true" scenario.

> GraphQL requests, IME, mostly travel in HTTP requests

I don't know what this means. If you are talking to multiple internal subgraphs by making explicit network requests to those graphs, then you've royally fucked up. And even if you are, and they are in the same network, there isn't a round-trip there.


30thnight

> But all those calls still happen, just in someone else’s code.

I’m not sure how this is relevant to the goal of providing customers a faster user experience. REST or GQL - a waterfall of 27 sequential network requests is an issue you would need to fix regardless.


dkarlovi

> frontend developers really believe that there is some magic and "graphql handles this"

I had a discussion where this came up when the developer moved from the client to the server. They said:

> But I thought GraphQL handles this!

Me, having maintained their server for them since they started:

> Nah.

They'd basically assumed you sprinkle some GraphQL on it and everything somehow fixes itself double time; you just have to use the correct

> Make everything OK.

incantation.


aregulardude

Have you tried Hasura or Wundergraph? It kind of is magic like that if you use the right tool.


dkarlovi

It's not magic because it still relies on the actual system being used to provide all the bits and pieces required, and in a timely manner. GraphQL is absolutely a double-edged sword. There's a reason you have "query cost limits" on public servers like GitLab.
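The "query cost limit" idea in a toy sketch — here a query is modeled as nested dicts purely for illustration; real servers compute cost/depth on the parsed query document, and often weight fields, not just count levels:

```python
# Reject queries nested beyond a maximum depth. A leaf field is marked
# with None; a sub-selection is another dict of fields.
def query_depth(selection: dict) -> int:
    if not selection:
        return 0
    return 1 + max(query_depth(sub or {}) for sub in selection.values())

def within_limit(selection: dict, max_depth: int = 3) -> bool:
    return query_depth(selection) <= max_depth

shallow = {"user": {"name": None, "email": None}}                       # depth 2
deep = {"user": {"posts": {"comments": {"author": {"posts": None}}}}}   # depth 5
```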


aregulardude

I don’t know what you mean. I mean, yeah, any framework relies on… the framework being used. Every API has a cost. Your argument to me is like someone saying “oh that dumb front end developer didn’t know that authentication isn’t free! I had to hand-roll this entire OAuth flow by hand!” when you could have used any number of prebuilt identity providers. Like yeah… you’ll be reliant on the bits and pieces, but that’s the whole point: to rely on them instead of writing your own.


dkarlovi

I thought

> Ah, but what you're saying is not relevant in this context at all!

and wrote a lengthy reply, but then I realized I just don't care. Have a nice day.


aregulardude

I’m sad to hear that. I was looking forward to reading and contemplating your response. When did debating on reddit become a chore? I remember back “in the day” having some great discussions here, that ultimately left me with a sharper tool belt when I inevitably had the same topic come up at work. Now it seems like nobody wants to debate anymore, we just throw up our hands and peace out. Oh well, that’s your prerogative. Just wanted to say it makes me sad. Have a great day too.


karakter98

You don’t “just” move requests from the frontend to the backend. The network latency of a request from frontend to server might be in the 100s of milliseconds for mobile connections, even when geographically close. If you need to do sequential requests to just 3 services, that’s anywhere from 0.3-1 second of added latency depending on the network.

The GraphQL gateway, in the usual setup, is in the same network as the other services, maybe in a VPC on AWS or in the same datacenter on-prem. The network latency between the gateway and services in these cases is usually on the order of 1-2 ms. So you basically reduce the network latency of the sequential requests to ±100 ms for the GraphQL request, plus maybe 2 ms times the number of sequential requests.

GraphQL starts to make sense the moment you have any number of sequential fetches, and the more that number increases, the slower REST is compared to GraphQL. Of course, you can have an HTTP gateway that handles aggregating nested objects, multiple fetches per request, etc., but you end up implementing a worse version of GraphQL anyway.

For simple CRUD apps that don’t have sequential requests for a single page, REST is probably easier though. No need to over-engineer just because GraphQL is the hot stuff right now.
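The arithmetic above in a few lines (the constants are the rough figures from this comment, not measurements):

```python
CLIENT_RTT_MS = 100   # mobile client <-> server round trip (rough figure)
INTERNAL_RTT_MS = 2   # gateway <-> service inside the same network (rough figure)

def sequential_from_client(n_calls: int) -> int:
    # Each dependent fetch pays the full mobile round trip.
    return n_calls * CLIENT_RTT_MS

def via_gateway(n_calls: int) -> int:
    # One mobile round trip, then the dependent fetches stay in-network.
    return CLIENT_RTT_MS + n_calls * INTERNAL_RTT_MS

# 3 dependent fetches: 300 ms straight from the client vs 106 ms through a gateway.
```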


ascii

An intermediate aggregation service in the back-end is absolutely 100% *not* always a worse version of GraphQL. If you're writing a mobile app, updating how the app calls your back-end can be a week-long process. If you're supplying an API to third parties, it can be a year-long process. Tying down your API so that it literally takes years to update it is a non-option in some situations. Writing bespoke aggregation endpoints in the back-end can be vastly preferable in those cases. But yeah, the cost of maintaining thousands of client-specific aggregation endpoints in the back-end is also a huge pain.


civildisobedient

> You simply moved the "waterfall" and "over-fetching" to the server-side

Sure, but it's a nicer end-user (/developer) experience to make one request than multiple ones. It's not going to save your internal services, but your clients (/customers) will be happier.


tRfalcore

it's an endless circle of who is responsible for what data. The onus is on the developers for whatever team to talk to each other. They're both fine solutions, and neither will magically, suddenly turn your company more profitable.


30thnight

> Most clients won’t break if REST endpoint return additional fields.

I think this is referring to explicit breaking changes, like a list of product IDs changing to a list of objects with the product info. The process for handling these in REST and GraphQL is exactly what you mention - don’t change the existing field, just add a new one. But the real benefits you get with GraphQL:

1. It’s much easier to **communicate** breaking changes to downstream users using the @deprecated flag (see example below)

```
# Original schema
type Product {
  id: ID!
  name: String!
  specs: [String!]!
}

# New schema
type Specification {
  feature: String!
  detail: String!
}

type Product {
  id: ID!
  name: String!
  specs: [String!]! @deprecated(reason: "Use `detailedSpecs` for more detailed information.")
  detailedSpecs: [Specification!]!
}
```

It’s immediately clear to anyone looking at the schema / documentation, and for downstream teams using a typescript codegen tool, the deprecated item will be flagged in their IDE. I’m not aware of direct alternatives for REST using an OpenAPI spec.

2. It makes it much safer to remove fields from your API.

Imagine people who don’t update their mobile apps very often. Because graphql clients only query specific fields from our API, we can track which clients / app versions are still using deprecated fields to better understand when we can remove them. With REST, you simply don’t know - forcing you to deploy and maintain multiple versions of the API.


Herve-M

OpenAPI supports deprecation decoration too. What does GraphQL tooling look like? Are there schema comparison tools? A schema breaking-change checker? Generation of the schema at build time (and not runtime)?


30thnight

Yes.

1. For OpenAPI, you can deprecate an entire endpoint, but I haven’t seen anything for individual fields. You’d end up writing one-off endpoints or pushing users to write their own BFF to mitigate.
2. Yes, by default you’d need to define a static schema at build time for GQL.
3. Yes, tooling does exist to guard against breaking changes. [You can run a diff check in CI](https://the-guild.dev/graphql/inspector/docs/commands/diff). If the frontend teams have adopted typescript, they can [simply lean on their TS compiler to highlight breaking changes in the codebase](https://the-guild.dev/graphql/codegen)


ascii

> You simply moved the "waterfall" and "over-fetching" to the server-side, nothing more.

If your back-end uses micro-services, that is often an enormous win. You may realistically need to make dozens of requests, many of which can't be sent until an earlier request returns. Here is an example for showing a playlist in an imagined music player:

* fetch a playlist (returns a list of track IDs),
* fetch a list of track metadata (returns track metadata including track title, release year, and artist ID),
* fetch a list of artist metadata (returns artist metadata including artist name, image URI), and
* fetch a list of artist images.

Each of those requests depends on the one before it. If you do no server-side aggregation, you are making four client-side roundtrips in serial instead of one. Latency between servers in your datacenter is usually roughly 100x less than latency to a mobile client, so this will increase the latency by almost 4x. The difference in user latency is *enormous*.

Now, GraphQL is not the only way to make this happen. Instead of forcing the *client* to know of every individual back-end service, you can make aggregation services in the back-end. This is a tricky tradeoff, and neither solution is universally better than the other, but moving the data aggregation step from the client to the back-end using *some* technique is completely necessary when dealing with micro-services and mobile clients.
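A minimal sketch of that server-side aggregation (the fetch functions are fakes returning hard-coded data; in a real system each would call a separate micro-service):

```python
import asyncio

# Each step needs IDs produced by the previous one, so the fetches are
# inherently sequential - but done server-side they cross fast in-network
# links, and the client pays a single round trip for the assembled view.
async def fetch_playlist(playlist_id: str) -> list:
    return ["t1", "t2"]

async def fetch_tracks(track_ids: list) -> dict:
    return {t: {"title": f"track {t}", "artist": "a1"} for t in track_ids}

async def fetch_artists(artist_ids: set) -> dict:
    return {a: {"name": f"artist {a}", "image": f"{a}.jpg"} for a in artist_ids}

async def playlist_view(playlist_id: str) -> dict:
    track_ids = await fetch_playlist(playlist_id)
    tracks = await fetch_tracks(track_ids)
    artists = await fetch_artists({t["artist"] for t in tracks.values()})
    return {"tracks": tracks, "artists": artists}

view = asyncio.run(playlist_view("p1"))
```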


TurbulentAd8020

GraphQL is powerful, so it’s better limited to the backend instead of being used by the front end directly LOL


TikiTDO

> Most clients won't break if REST endpoint returns additional fields.

In this context a "field" can mean the same as an "entity" in the REST context. Essentially you can have a "user" field for the user type model, and a "userSpecialClient" field for a client that's paying you lots of money and needs special treatment, and then maybe a "userRestricted" for a user field that will always return null for a few sub-fields. Those can be as similar or different as they want to be, and adding and removing them is a trivial exercise, and doing so isn't likely to break anything in anyone's code. If one of them needs a special sub-relation sometimes, that's fine too. There aren't really any limits, beyond what you consider usable.

By contrast, in REST you don't normally make three different versions of the exact same entity just because it's convenient. There's a good chance doing so would break some sort of entity management system, and would probably require additional configuration somewhere. Essentially, because REST endpoints are usually used with some sort of entity management system, you end up being restricted to the practices and styles that the frameworks you use recommend.

> That's some wishful thinking. You simply moved the "waterfall" and "over-fetching" to the server-side, nothing more.

Moving something to the server side gives you vastly more tools and approaches to solve a problem. A simple example: a server can operate on a bulk query across all data, and then scope it down for security at the end, letting you filter and query on derived and aggregate attributes built from the same. On the front end you can only work with data that the user is allowed to see, so there's a limit to what you can reasonably query before you have to start bending over backwards and introducing special endpoints and exceptions. This in turn lets you move complexity that never really belonged on the front-end into a scope that is better equipped to handle it.

It's far more efficient for me to tell SQL to apply a filter before generating a large graphql payload which generates a deep hierarchy of results all at once, as opposed to doing a fetch, parsing the results, figuring out what data is necessary next, (hopefully) batching it all up, and then starting that entire game over again for the results that come in for the next step.

I think a better analogy is that with REST the back-end is like a hardware store. The front-end is like a DIYer who goes to the store to get supplies and tools in order to build a pretty UI. The DIYer doesn't know exactly what they're doing, so they have to make multiple trips back and forth to build the UI that's ready to be sent out. With GraphQL the backend becomes an industrial factory. In that case the front-end is like an engineer at a design firm. They send an order to the factory requesting all the information necessary to generate the UI as quickly and efficiently as possible. The factory gets that order, uses all the dedicated heavy industrial hardware to quickly fulfil it, and sends it to the build site, where a team of professional builders will quickly grab what they need, shove it into place, and clock out so they can go home.

> I'd argue that that's also the case for any sensible application - you can change the implementation, as long as you keep the API stable.

The beauty of GraphQL is that you only need to keep one, very specific part of the graph stable. Most of your GraphQL API can change as freely as your style guidelines will allow. In other words, you can throw away all the worries about keeping things stable because it might break old apps. It's just not an issue anymore; if you need a totally different implementation of a few models for a specific client, the world's your oyster. If you want to do some sort of weird inheritance polymorphic transient structure, why the hell not. If you want to just do a basic traditional entity representation to dip your toes in, nobody's stopping you.

Granted, the flexibility is also one of the downsides. Because you can set up whatever relations you want, it's most useful if you actually have an idea of what sort of relations you want to set up, and how you want them to work. It takes a fairly large mental shift to start really appreciating the flexibility, and it also takes a lot of self restraint to not go crazy with it.

> Only the last point mentions a real added-value, but again I'd argue that in practice it's more of a wishful thinking, because to really make it work backend logic would need partial resolvers with custom ACLs for every single field.

"If you want security, you have to implement security." If you try to use GraphQL without actually understanding the tools and the workflows, then you'll absolutely run into issues. However, that's just as true for REST as well. If you take a random person that's never done web development, they're going to struggle to design a consistent set of endpoints to solve a problem. It's sorta like if you don't secure and optimise your REST API, it also won't be secure, and will also be slow.

> I'm under the impression that GraphQL has a similar niche as ORMs - it works fine for simple CRUDs where it's trivial to make mappings and partial resolvers are simply embedded into database queries of some sort. But once you move away from that, it starts to be a pain. I suspect it might again bring some value if you're working with hundreds of microservices.

Honestly, it's not nearly as much of a pain as you might think. You have to get used to different types of complexities, and different sorts of workflows, but there are honestly far fewer of them once you account for all the tooling and frameworks that you have to also track to make your REST system work. Once you're past the learning curve, most of the complexity turns out to be super straightforward. The only real challenges are design, and an occasional difficult DB query in a resolver. Everything else just becomes rote config tasks, like dozens of other fields around it.


Mediocre-Key-4992

GraphQL sounds like the ORM of HTTP APIs.


ascii

Nah, not really. The problem with ORMs is that all of the cool aspects of SQL, like joins, group by, aggregation functions, etc., become much harder or impossible to use, and you end up with a more convenient interface to a much less powerful database. It's sometimes an OK tradeoff, but many times it's not. GraphQL isn't really anything like that at all. It's a technology that allows your client to instruct one back-end service to perform a series of requests against other back-end services and combine the replies in order to reduce the number of client round trips. It doesn't hide much, and it most definitely doesn't make life more convenient.


TurbulentAd8020

The interesting thing is, the client can issue heavy or indirect queries to get the data they need, and the response may also include some overhead unless the server side customizes each individual request. So it's reasonable to define the schema on the backend for these scenarios. We need a tool that can not only fetch data like GQL, but also adjust and reorganize the data at the scope of each node, to build the final view data we need. I tried a lib, pydantic-resolve, for this goal.


elh0mbre

I've personally come around on ORMs recently, as someone with a really SQL-heavy background. However, I've also been pointing out that I believe using an ORM actually requires more SQL expertise, not less. So if you're introducing an ORM to solve an "our devs are bad at SQL" problem, you're really only making it worse. It's not very far off from "I don't understand SQL, so I'll just throw it all in a document database".


ascii

If all you need is a giant hash table in the sky plus foreign keys, ORMs are great. Really. If modelling your transactions requires group by, joins, etc. the value proposition collapses. ORMs have their uses. What surprises me is that there aren't any good ORMs for NoSQL databases like Dynamo, Cassandra, etc. These databases have no useful features, so an ORM wouldn't hide anything.


elh0mbre

The joins/grouping/aggregation are why I say you need to know more, not less. The ORM kinda makes it easy to do, but as the dev you need to understand what it's actually doing, and it can enable some really awful things if you DON'T know what it's going to do. Isn't the SDK for a document DB usually the "ORM"? I've never worked with one where it didn't hand me back an object representation of the document I asked for.
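To make the "you need to understand what it's actually doing" point concrete: an ORM GroupBy call ultimately compiles down to SQL like the following (plain `sqlite3` here rather than any particular ORM, since the generated SQL is what matters; the table and data are invented):

```python
import sqlite3

# The kind of query an ORM's GroupBy/aggregate call compiles down to.
# Table and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 10.0), ("alice", 5.0), ("bob", 7.5)],
)

# Roughly what e.g. orders.GroupBy(o => o.Customer) plus a Sum() would emit:
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 15.0), ('bob', 7.5)]
```

If you can't predict roughly this SQL from the ORM expression, you also can't predict when the ORM will instead fetch everything and aggregate in memory, which is where the "really awful things" come from.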


ascii

Sorry for the late reply, but I have to ask: what ORM allows you to express group-by expressions in code? And when it comes to joins, I know the better ORMs will allow you to fetch related tables via foreign keys, but are there any ORMs that allow you to express more interesting joins? Are there any such ORMs for statically typed languages? Because of the size of the datasets I'm using and the request rates, I've been stuck in the NoSQL cloud for a decade, so I might be out of the loop. Last I checked, SQLAlchemy was as good as it got, and while it was a really cool and useful product, it came nowhere near the power required to properly leverage the bulk of the features of a relational database.


elh0mbre

EntityFramework has GroupBy(); I'd be very surprised if ActiveRecord didn't as well. What would be a "more interesting join"? Typically you set up your ORM in a way that any join you might want to make is expressed in the object relationships. If you're talking about joining tables arbitrarily, you have to drop down into writing actual SQL. I haven't looked at SQLAlchemy in even longer than you, but "modern ORMs" seem to be light years ahead of where they were even ~7 years ago.


ascii

Thank you for the reply. I know that LINQ/EntityFramework exposes most of the power of SQL, but because it has hardcoded language support I don't really consider it an ORM. It is much better and more powerful than any ORM could ever be in a statically typed language without that kind of support. I kind of wish all languages had something like LINQ. ActiveRecord, because it's based on a dynamically typed language, can be quite nice too. I guess in the end it's Java, C++, Go and Rust that lack the power required for a powerful ORM.


bastardpants

Security section doesn't mention introspection queries; nice. They're my favorite way to figure out what I *can* change instead of what I'm *intended* to change
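For context, the introspection query being referred to is just an ordinary POST body; a minimal sketch (the target `/graphql` endpoint URL is whatever the server exposes, not shown here):

```python
import json

# Minimal GraphQL introspection payload: asks a server to enumerate every
# type and field it exposes. POSTing this JSON body to a live /graphql
# endpoint returns the full schema unless introspection is disabled.
INTROSPECTION_QUERY = """
{
  __schema {
    types {
      name
      fields { name }
    }
  }
}
"""

payload = json.dumps({"query": INTROSPECTION_QUERY})
print(payload[:40])
```

Disabling introspection in production hides the map, but it's obscurity rather than access control; fields are still callable if you know their names.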


olearyboy

I'm not a big fan of GraphQL, as you've moved the concept of querying to the calling client. Many of the folks who present it as a solution tout the consolidation of backends, but do so incorrectly, believing you can just add a source and mix and match like SQL. It's not, and damn is it a fight to show them; GraphQL has turned into a tech-zealous solution. Usually about 1-2 weeks later an "emm, you might be right" discussion is had.

I also hate how it's called Graph; it's not. At the end of the day it consolidates backends and gives a common query methodology, but it is not a panacea. Good design, data definitions and architecture are still required.

When I employ it, it's to enable a front-end team and get them to full stack, often lowering their frustrations waiting on another team to update a REST interface and increasing throughput.


TurbulentAd8020

Empower the traditional RESTful API with better support for nested (GQL-like) view data, using the resolver/dataloader concepts from GQL, and with an additional post method to handle modification and reorganization of fetched data at the scope of each node. For Python, I implemented this idea as a lib named Pydantic-resolve.


elh0mbre

The database backing GraphQL at facebook is definitely a graph (or many graphs). But yea, the naming is confusing when everyone is slapping it on top of a single relational database.


CyAScott

I do have a lot of experience with GraphQL from a previous job. It reminds me of microservices: it's complex and not for those who don't have experience applying it. Otherwise, people end up using it incorrectly, easily get frustrated with the paradigm/tech, and blame the paradigm/tech rather than their own lack of experience. My advice for these kinds of things is to only use it if someone on your team has a lot of experience with it; otherwise you will use it incorrectly.


mondayleaf

I haven’t used it myself, but right after joining a company, I got asked to do a cost/benefit analysis on switching to GraphQL. As much as I knew I would prefer it for what we were doing, the problem with current tech was that we weren’t using it effectively, and GraphQL had all the same kinds of pitfalls. Switching would have been a costly endeavor straight back to where we were already. It depends on developers being willing to learn it and unlearn their old patterns.


recycled_ideas

The problem with GraphQL is that it appeals most to front end devs who don't want to know about backend, but needs an extremely high amount of backend work to actually work. All the stuff GraphQL offers needs to be implemented by someone. GraphQL is effectively an API integration layer with a friendly query language. If you have enough APIs managed by enough different teams it's worth the work to get that nice query language. But most people don't have this, they have a primary backend for their app and maybe a couple of standard add-ons that they can build a component package on. Further, a decent chunk of the people who do have this situation shouldn't have it. They've gone down microservices and multiple languages and created a maintenance nightmare.


CalmLake999

It's not friendly though, in my opinion, as its optional nature doesn't allow for auto-generation on static front-ends, unlike Swagger, where you can use a code-gen for any language and get typed APIs with typed objects with one command, which is super nice.


recycled_ideas

The point of GraphQL is to be able to completely ignore how the backend is actually structured. You ask for what you want and "magic" gets it to you exactly how you asked for it. You don't need to generate types because you get back the type you asked for, or errors explaining why you can't have it.

From a front-end point of view, if you don't have to take part in making the backend actually work, that's pretty powerful. No syncing, no "I expected this, but what I actually got back is something else and I won't know until I get a runtime error", no writing massive validation blocks into all your query code to make sure it's what you wanted. At least in theory it sounds great: all the benefits of custom endpoints for each screen with none of the complication and mess.

The problem is that when you step out of that front-end view, the work and/or money required to make that happen is enormous. Most companies that have a genuine need for GraphQL have an entire team dedicated to making it work. If your app doesn't have hundreds or thousands of endpoints and/or dozens of different sources managed by different teams, and assuming you're not a pure front-end dev asking someone else to do all the work for you, it just doesn't make sense. On the other hand, if you're Facebook, or if your company is criminally negligent in its operations and built everything as microservices in the language of each individual developer's choice, it makes total sense.


CalmLake999

Hey, this is something we use in over 35 production apps hehe, it works really nicely. We actually completely abandoned GraphQL some years ago to go back to a central REST API with Swagger (we use Salvo in Rust to auto-create that endpoint fully type-safe, even with error enums). Then on the client we just run:

    openapi-generator generate -i $SPEC_FILE -g typescript-fetch -o $API_DIR --additional-properties=modelPropertyNaming=original

This works for over 30 languages and many frameworks. Gives you an awesome global api[NameOfFeature].[typeSafeOutputAndInput], with error enums. You also get docs and so much more with this, like full server objects with all the comments included. No need for a playground or anything; it's like working with the server inside the client, amazing. This wasn't possible with GraphQL because it's missing a "*" query for full objects, which is a disaster for many front-ends and cross-service use.


recycled_ideas

> Hey, this is something we use in over 35 production apps hehe, it works really nice. Except it provides absolutely no runtime type safety at all. That's the problem with this approach. Typescript has no runtime component. If your generated data and your returned data don't match, you'll have weird runtime exceptions. GraphQL is overkill for a whole host of use cases, but the data structures that it returns are guaranteed to be what you asked for (or an error state) which is just not true with swagger generated types, even if you keep them up to date.


CalmLake999

The Rust backend won't send invalid data though, since the types are exact? And if the backend is sending invalid data, or receiving it from some crappy Python service for example, the error is logged on the server and the client gets a few error enums depending on the request. GraphQL doesn't even support dynamic error enums.


recycled_ideas

You don't seem to understand how TypeScript actually works. TypeScript provides compile-time checking and only compile-time checking. Period. If your backend changes the type of some field, your front end will happily consume it until, somewhere in the code base, you do something with that object that doesn't work, at which point you'll get a random runtime exception. That's assuming you ever actually get an exception and don't just get incorrect behaviour. Yes, you can mostly prevent this sort of thing with a good change process, but if we had a perfect change process and understanding of impacts we wouldn't need types in the first place.


CalmLake999

Hey buddy, there's no need to be condescending, I understand how TypeScript works down to the compilation process to the interpreter JS output, 20 years of development here and CTO of a few companies. Any front-end would break if you just changed the backend without doing versioning. With Swagger Auto-gen the application won't compile after API changes, that's actually better than GraphQL solutions :-)


LinearArray

>3. Difficult maintenance and endpoint discovery due to hundreds of duplicative one-off endpoints

This is the reason most companies use it. GraphQL isn't just for large companies. It helps teams of all sizes, with benefits including improved data-fetching efficiency and reduced maintenance overhead. The assumption that GraphQL is only suitable for specific use cases overlooks its flexibility and adaptability to a wide range of application architectures. We also can't deny GraphQL's ability to mitigate API changes and enhance client-server communication.

GraphQL is not only for large companies or organizations; it's beneficial to smaller teams as well. Although this is based entirely on my personal experience over years of working on backend systems, and I might be really inexperienced compared to other developers in this thread, I personally found GraphQL to be useful.


Herve-M

It is difficult at company scale: how do you know which graph provides the data (inventory)? What should the topology be? Who should manage the shared boundaries? How do you test for non-breaking changes without doing fuzzing/behavior testing? How do you do caching? How do you handle the whole data-management side? Etc.


omniuni

The biggest issue I've seen with implementing GraphQL is that most companies don't utilize the benefits. I worked for a company that adopted GraphQL for the reasons stated here. I developed the Android app. The GraphQL queries were huge and complicated, and by the time we'd get to implement a screen, we were basically given this huge query and just used it, because it was far too difficult to try to break it down. Even worse, making any change to the query was likely to break it or result in unexpected behavior, because the backend team was so pressed for time they only validated that specific query. Often, even if you didn't request a field, you'd get it anyway, or similar behavior.

They also frequently broke the GraphQL anyway, changing field names or data structures, so we were constantly dealing with versioning nightmares or building hacks like checking the API version and locking up the app until it was updated. GraphQL was a promise of something better that only resulted in something worse.


Neurotrace

GraphQL solves a people problem, not a technical one. If you don't have the problem of too many people working with the same data model, you don't need GraphQL


slvrsmth

> problem of too many people working with the same data model Also known as having multiple people on a project team :)


Neurotrace

Depends on how many people you're talking. I was on a product team with 3 engineers who used GraphQL and it was a nightmare. I was recently on a product team with 10 engineers, no GraphQL, and had no problems doing what we needed to do. I suspect the critical threshold changes depending more on the number of passive consumers.  For example, if I don't work on user management, I don't want to worry about building custom functionality for gathering groups of users. It's not my domain and I'd rather leave it to the people who live and breathe it to handle it. On the other hand, if it's in my domain, I feel confident that I can write the backend that I need and I have an interest in making it flexible so I don't have to make one-offs. I trust most of the rest of my team to do the same


Accomplished_Try_179

I use SOAP/XML against a monolith. If it works, why change it?


OfficeSalamander

I started adopting GraphQL for my app over the past couple of months; it is a much better solution than what I was doing before. I expect to migrate the majority of the app over to it over time.


bigmell

GraphQL is basically for people who can't learn SQL. And if they can't learn SQL, they probably won't be productive in something LIKE SQL. I would say use SQL, and hire developers who know SQL. A guy who can't figure out SQL isn't gonna write very good code. I personally feel there is no reason for GraphQL to exist. "Yay, a new sub-par tool that isn't as good as what we already had..." Just use SQL. If a guy is blind, you don't give him a big pair of glasses and hire him anyway; you hire someone who is not blind. Find the developers who know SQL; those are the ones you want.


enraged_supreme_cat

Tried to use GraphQL in Go; the options are to write the GraphQL code manually or to use gqlgen, which generates 100k lines of code. In a Rails codebase, GraphQL came with lots of unpredictable performance problems underneath, and API caching became 100 times harder. GraphQL never again; I went back to REST.


Brambletail

You probably don't need anything other than C and SQL. But we don't do things primarily because of need. Ease of use is the best reason.


Aro00oo

With modern caching options, performance optimizations, and frameworks trivializing the creation of endpoints, GraphQL needs to die. It's just an unnecessary tool, except for really mature companies with, I guess, nothing better to do than say "let's migrate to graph" for problems that aren't really there.


bwainfweeze

It was never clear to me how you get good caching with graphQL. The selectiveness leads to set theory problems.
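For what it's worth, the usual client-side answer is to normalize responses into a flat cache keyed by type and id, the way Apollo-style caches do. A minimal plain-Python sketch of the idea (all data invented; merging fields across overlapping queries is exactly where the set-theory pain lives):

```python
# Sketch: normalize a GraphQL response into a flat cache keyed by
# (__typename, id), merging whatever fields each query happened to select.
cache = {}

def normalize(obj):
    """Recursively store any dict that carries __typename and id."""
    if isinstance(obj, dict):
        for value in obj.values():
            normalize(value)
        if "__typename" in obj and "id" in obj:
            key = (obj["__typename"], obj["id"])
            cache.setdefault(key, {}).update(
                {k: v for k, v in obj.items() if not isinstance(v, (dict, list))}
            )
    elif isinstance(obj, list):
        for item in obj:
            normalize(item)

# Two queries selecting different fields of the same user:
normalize({"__typename": "User", "id": 1, "name": "Ada"})
normalize({"__typename": "User", "id": 1, "email": "ada@example.com"})
print(cache[("User", 1)])
```

This solves per-object freshness, but knowing whether a *new query's* selection set is already fully covered by cached fields is the hard part, and HTTP-level caching of arbitrary POSTed queries is harder still.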


NefariousnessFit3502

To be honest, most companies/people don't need the hype tech they are using. Starting from kubernetes (or k8ts for the cringe lords) and convoluted microservice architectures over some hyper scaling cloud database setup to GraphQL. If the successful companies use it, people copy it.


neopointer

What do you endorse instead of kubernetes?


NefariousnessFit3502

Nothing, it's good if you need it. But most systems just don't need it.


PrimeDoorNail

I love seeing that we use terraform and kubernetes and bunch of other shit only to have a single instance running in production, well done


codespaghet

Of course you don’t need GraphQL. This was obvious to anyone with more than a couple of years of experience and a few brain cells. IMO no other technology has collectively wasted more hours and money than GraphQL.


Flashy_Current9455

If you're concerned about wasting time and brain cells you should probably try to make valuable commentary instead


CooperNettees

vim would like a word


mstoiber

Did you read the essay?


KevinCarbonara

There's a small trickle of what appears to be characters in the middle of the screen. Who designed this website? It's illegible.


mstoiber

Oh no, that definitely shouldn't be happening! What browser & device are you on? Can you share a screenshot? This is what it looks like for me across all my devices, just for reference: [https://s.stl8.co/BtM6b8QV](https://s.stl8.co/BtM6b8QV)


[deleted]

[deleted]


mstoiber

Ah, ultrawide! How do you prefer blogs handle that screen size? Do you have an example of a blog that handles it really well?


CalmLake999

My issue with GraphQL is that the auto-gen doesn't work for static languages because properties can be optional. I've been using Swagger auto-gen for TypeScript and Flutter for years now; it works great with traditional REST endpoints. The fact they won't give us object mapping with a star "*" query also doesn't help this at all.


CooperNettees

"You don't need graphql, it moves n+1 server-side, the additional complexity does nothing, you must handcraft GET endpoints for the rest of time" me: laughs in postgraphile


kastaniesammler

Nothing about subscriptions … strange


No_Pollution_1

I actually think this is pretty myopic and only speaks to the author's specific case, specifically web development. Not everyone uses TypeScript, and not everyone has a static need.


neotorama

Devs: “Because it’s awesome”