Offset Considered Harmful or: The Surprising Complexity of Pagination in SQL
Posted by TheCrush0r@reddit | programming | View on Reddit | 128 comments
gadelat@reddit
https://use-the-index-luke.com/no-offset
plumarr@reddit
This solution has been known for more than 20 years, but it has failed to become widely known among developers. I have no idea why.
757DrDuck@reddit
They like distinct pagination
RiverRoll@reddit
I have to disagree with infinite scroll being the silver bullet the author seems to think it is. It works well for news or user content, for instance, but in some cases it can be a bit of a pain in the ass when you know what you're looking for is somewhere in the middle (like when you sort the items in a store by price).
MillerHighLife21@reddit
Came here to post it and you beat me to it.
EOengineer@reddit
Cool stuff!
pheonixblade9@reddit
wow, I actually passed the 5 question test. Love that site :)
jonny_boy27@reddit
This is probably the most comprehensive resource on this issue
ItsAllInYourHead@reddit
The thing is: offset pagination is WAY simpler to implement, design for, and use. And in MOST cases the duplicate result/skipped result issue just doesn't really matter at all. A user may occasionally notice some weirdness, but it's not going to have much of an effect. So it's a trade-off we make.
There certainly are scenarios where it does matter, but those are pretty rare in my experience. And in those cases you'll want to take the more complex route. But again, those are the exception in my experience.
eattherichnow@reddit
Gods, that. Like, there are places where it wins - mostly with extremely large datasets - but most of the time infinite scrolling and cursor-based pagination are so annoying. What folks seem to miss is that duplicate rows are actually a very predictable behavior. It's easy to work with, and actually signals something to me. With cursor-based pagination things get really weird.
And, yes, offset pagination lets me do a binary search, which is sometimes much easier than coming up with a good search query. It’s super useful. Don’t take it away from me unless you really, really have to.
Skithiryx@reddit
The problem with offset is most of the time not the duplicates (although if that matters for your use case, it matters). It's that it is fundamentally taxing on your database, because the database's only way to do it is to sort the entire result by your sort and then walk to the nth item.
Filtered queries, on the other hand, make use of the indexes you hopefully have on the fields: they filter first and then sort, which is more efficient because filtering rows out is cheaper than sorting and skipping, and you only sort the smaller set of results.
ItsAllInYourHead@reddit
I'll say it again: it's a trade-off. In the vast majority of cases, for your typical SaaS product or whatever that most people are working on, this just isn't consequential. It's not that "taxing" on the database in 99% of the cases. It's certainly not as efficient as it could be, sure, but it's rarely so problematic that it's causing you database issues or noticeable regular performance problems. And if it is, THEN you generally make the extra effort to use a different tactic. But it's usually just not worth doing that up front.
BenediktGansinger@reddit
Well it's always the same: it's fine until it isn't. And then it's a pain in the ass to change.
The proposed solution is hardly any more difficult to implement... instead of the page number you just pass the last value of your current page:
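A minimal sketch of that pattern, using SQLite from Python (the items table and ids are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)",
                 [(i, f"item-{i}") for i in range(1, 201)])

PAGE_SIZE = 50

def first_page(conn):
    return conn.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT ?",
        (PAGE_SIZE,)).fetchall()

def next_page(conn, last_id):
    # Seeks straight to the start row via the primary-key index,
    # instead of counting past all the skipped rows like OFFSET does.
    return conn.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE)).fetchall()

page1 = first_page(conn)
page2 = next_page(conn, page1[-1][0])  # pass along the last id you saw
```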
However, you can only do first/previous/next and can't jump to arbitrary page numbers so it definitely has some drawbacks. DB support for multiple search columns also seems limited.
It's definitely a more sensible default way to do pagination.
Freudenschade@reddit
Exactly. Anything more than a couple million rows and performance tanks. I made that mistake the first time I implemented something like this, running the DB out of memory since the result set was so huge. I even dealt with it today since another team did exactly this, which ended up putting a lot of load on the DB. It really doesn't scale at all, despite its simplicity.
aueioaue@reddit
Not a database person, but indexing doesn't handle this? I imagine this would amortize the pagination costs over the insertions, or inject a pipeline step where insertions queue a deferred update to an eventually-consistent index. Then with an index in hand, traditional pagination should be trivialized.
grommethead@reddit
Articles titled '[Something] Considered Harmful' Considered Harmful
fredlllll@reddit
so how else are we supposed to do pagination then? the solution in the article would only work for endless scrolling, but how would you jump from page 1 to page 7?
Jolly-Warthog-1427@reddit
I like the approach to order by id and then select * where id > 0 and ... limit 50
On the next round add the max id you fetched to the query. So
select * where id > 87234 and ... limit 50
That is really quick in most databases, as it can just look up in the index where to start: O(log n) time to find the start position, and from there it just walks up the index.
By using offset you quickly get to O(n) per page, as the database has to walk through every skipped row (within the WHERE filter) just to reach the latest page.
myringotomy@reddit
This doesn't answer the "how to get to page 7" question though. Also, IDs are great, but the problem gets a lot more complicated when you have a sort, as in:
Sort comments by best and paginate fifty at a time.
It gets even worse when there are filters.
Jolly-Warthog-1427@reddit
Just to ask, in what situations would you want to get to specifically page 7 without ever touching the first 6 pages at some point?
myringotomy@reddit
I'll give you an example from real life.
There is a web site which lists documents in alphabetical order. I don't know which page contains the documents that start with the letter K. I paginate by hitting the highest page number listed on the bottom of the page until I hit the ones that start with K and then when I inevitably overshoot go back by minus two or three pages.
Jolly-Warthog-1427@reddit
In situations like that you mostly have a limited number of things to look through. Say in the hundreds or thousands.
You would not jump to page 3,127,331 on a Google search.
You dont need pagination for thousands of entries. You need pagination for millions.
I agree with you for things like your contact list or friend list on facebook for example. But for say the user overview for admins on facebook, or the general accounting ledger for facebook, both with many millions of entries. There you either apply filters to properly limit down the search to tens up to thousands to get to this basic example or you need proper pagination.
skywalkerze@reddit
The smart thing to do for the developers of this app is to provide a way to search for documents by name, or by a substring of name, or by the starting letter.
You don't want to find page 214. You want the letter K. Page 214 is a workaround that you found because the devs didn't provide the right tool. The solution is not to ask the devs to provide the workaround, but the right tool.
It depends on the specifics I guess. If getting to the page you need by numbers is reasonably fast, that is fine. But in the general case, it could be millions of rows, and the database needs to go through them all and discard them, just to get to page 5346, taking minutes. That's not a good solution, even if it works.
myringotomy@reddit
Sure. But they didn't so I need to jump to pages non sequentially which is what you were asking.
That's just my usecase too, others may want to be looking for documents sorted by earliest or the latest edit or creation time. Let's say I wanted to see the documents from 2021. I sort by date and then jump around. Same story.
azlev@reddit
The documents that start with letter K is relatively easy. You can put the letters in a visual way like phone contacts do when you scroll down.
The hard part is a relative position like "page 7". You can get some approximation if there is a monotonic index, but the precise answer needs all seven pages.
jailbird@reddit
Well, the answer is always porn. You know exactly on which page of your favorites list is a certain video you want to see and want to get there ASAP.
solve-for-x@reddit
This can happen if you know the item you want lies somewhere between the first and last pages of results. For example, you know the item you want begins with the letter M, but the app you're using doesn't allow you to search alphabetically and just returns all results from A to Z organised into chunks of N results per page. So you would typically start by looking at a page near the middle, then perform a manual binary search until you find the correct item.
In theory, apps should give you the search tools you need but often they don't. And then an "infinite scrolling" type of pagination will frustrate your ability to use binary search to home in on specific results.
skywalkerze@reddit
The question is "why should the devs provide pagination". Your answer is not a reason for the devs to provide pagination. It is a reason to provide search by the start letter, or such. It makes no sense to say "devs should provide pagination because we know they're not going to provide the search I need, and I can use it as a workaround". As long as we're telling them what they should put in the app, tell them to put in what you need.
Worth_Trust_3825@reddit
This only works if your ids are incremental.
BaNyaaNyaa@reddit
It works if your ID is sortable (which it should be if you can create an index on it, which you should). It doesn't have to be incremental.
However, it means that if you only use the ID to sort the data you display, new entries will appear randomly on each page, instead of appearing only on the last or first pages depending on the direction of the sort.
It can feel weird, but it's fixable if you sort on another column, like the creation date. It should look like:
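One way that query could look, sketched with SQLite (which supports row-value comparisons; the posts table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id TEXT PRIMARY KEY, creation_date TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [
    ("b", "2024-01-01"), ("a", "2024-01-02"),
    ("c", "2024-01-02"), ("d", "2024-01-03"),
])

def page_after(conn, last_date, last_id, size=2):
    # (creation_date, id) > (?, ?) is a lexicographic row-value comparison:
    # later dates first, with id as a tie-breaker within the same date.
    return conn.execute(
        """SELECT creation_date, id FROM posts
           WHERE (creation_date, id) > (?, ?)
           ORDER BY creation_date, id LIMIT ?""",
        (last_date, last_id, size)).fetchall()

first = conn.execute(
    "SELECT creation_date, id FROM posts ORDER BY creation_date, id LIMIT 2"
).fetchall()
second = page_after(conn, *first[-1])  # resume from the last (date, id) seen
```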
Your pagination token would then be (creation_date, id), or a serialized version of this information.
Internet-of-cruft@reddit
The property you need is monotonically increasing.
As long as the keys increase and never decrease, you're good to go.
BaNyaaNyaa@reddit
Right, but even if the key doesn't increase monotonically, it doesn't "break" paging per se. New entries appearing randomly on each page is not strictly undesirable behavior.
yasamoka@reddit
UUIDv7 addresses this.
OffbeatDrizzle@reddit
As did version... 1... lol
lturtsamuel@reddit
This looks like something that can be done automatically, so why don't databases just implement it?
doterobcn@reddit
What about every time you need data sorted by some field where ids are not sorted??
BigHandLittleSlap@reddit
ASP.NET OData does this by default.
amakai@reddit
Pretty much every large system does this. There's an even more generic approach to this though - instead of returning an "id" to start from, you return a generic "cursor", which from client perspective is just a byte blob they need to pass back to get the next page.
The reason for this is horizontal scaling of databases, where your ids are sharded into 100 different instances of the database, and you do not want to scroll through them one at a time (as that would make the first one very "hot", because everyone looks at it). Instead you send a request to each shard to return 1 row back, and thus get 100 rows to send to the client. But now you have 100 ids to track (one per shard). So you serialize them into a "cursor" and send that to the client. When the client gives it back, you know how to deserialize it and restore the position in all the underlying "streams".
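A toy sketch of that idea, with plain Python lists standing in for the shards (a real system would query separate database instances):

```python
import base64
import json

# Each "shard" is just a sorted list standing in for one database instance.
shards = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]

def fetch_page(cursor_token, per_shard=1):
    # The cursor serializes one read position per shard; to the client
    # it is an opaque blob it just hands back to get the next page.
    if cursor_token:
        positions = json.loads(base64.b64decode(cursor_token))
    else:
        positions = [0] * len(shards)
    rows = []
    for i, shard in enumerate(shards):
        chunk = shard[positions[i]:positions[i] + per_shard]
        rows.extend(chunk)
        positions[i] += len(chunk)
    next_token = base64.b64encode(json.dumps(positions).encode()).decode()
    return rows, next_token

page1, token = fetch_page(None)
page2, token = fetch_page(token)
```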
Midoriya_04@reddit
How would one implement this?
flingerdu@reddit
You choose a solution that offers this out of the box and save yourself the whole trouble.
Midoriya_04@reddit
For production yes. I'm still learning so I was curious on how to actually implement it haha
My project is a doordash clone so I currently have an API that just returns all-restaurants/dishes etc. Was thinking of implementing pagination there.
ffxpwns@reddit
Check this out. I didn't read the full article, but I skimmed it and it seems to cover the high points!
Midoriya_04@reddit
Thank you!
Drisku11@reddit
Take your 100 IDs, concatenate them into a byte array, and base64 encode it to send to the client. Optionally AES encrypt and add an HMAC before base64 encoding so the client can't muck with the cursor; it can only give it back to you.
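A sketch of the signing part with Python's standard library (HMAC only; the optional AES encryption step is left out, and the secret is a placeholder):

```python
import base64
import hashlib
import hmac

SECRET = b"server-side-secret"  # placeholder; keep out of source in real code

def encode_cursor(ids):
    payload = b"".join(i.to_bytes(8, "big") for i in ids)
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag + payload).decode()

def decode_cursor(token):
    raw = base64.urlsafe_b64decode(token)
    tag, payload = raw[:32], raw[32:]
    # Reject cursors the client has tampered with.
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid cursor")
    return [int.from_bytes(payload[i:i + 8], "big")
            for i in range(0, len(payload), 8)]

token = encode_cursor([87234, 991, 12])
assert decode_cursor(token) == [87234, 991, 12]
```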
amakai@reddit
I would also add one byte for version meta-field in front in case you decide in the future to change the format, but other than that - this is correct answer.
jkrejcha3@reddit
It's a pretty common practice. A good example is Reddit's API, which does this (IDs are just numbers in base 36) with the after and before parameters in listings, but as the grandparent points out, this means that to get to "page 7", you have to make 7 requests.
carlfish@reddit
If a user wants to jump from page 1 to page 7, it's inevitably because you're missing a better way of navigating the data. Like they want to skip to items starting with a particular letter, or starting at a particular date, but there's no direct way to do so, so they guesstimate what they are looking for must be about so-far through the list.
remy_porter@reddit
Usually, if I'm skipping large amounts of pages, it's not because the UI doesn't let me refine my search- it's because I don't have a good idea of what I'm searching for.
sccrstud92@reddit
Why not go through pages one at a time? Why go to some random page in the middle?
TehLittleOne@reddit
There are times I do it and I am basically not sure where the info I want is but I know it's not the next page and know it's not the last page.
For example, if I'm looking at a list of movies ordered by date for the last 20 years and want to find something from 2017, that's probably somewhere a little before the middle. I don't know exactly where, so I'll guess somewhere and basically binary search it manually.
sccrstud92@reddit
This is a perfect example of what /u/carlfish was saying. If people want to find a movie from 2017, the UI should let you filter by year or by a range of years. If a user has to manually binary search through paginated results, that is a UX failure.
TheRealSplinter@reddit
Filtering data by year (or in general) is not guaranteed to remove the need to paginate the results
sccrstud92@reddit
It is not supposed to remove pagination entirely. It is supposed to reduce the result set to a size where you can exhaustively search it using "prev page" and "next page" buttons, i.e. a few pages of data. Additionally, it should reduce the result set to the point where there is no benefit to skipping pages. People skip pages because they are performing a binary search on the results (at least, that is the only scenario I have been presented with so far). This implies that the results are ordered, and that the user knows the value of the ordering field on the result they are looking for. As long as users can filter on that field, they will never need to binary search on it.
TheRealSplinter@reddit
I think these are assumptions that will result in annoyed users in some case if page numbers are removed. Sometimes users don't know with enough precision what they are looking for. Sometimes data isn't evenly distributed across the sort/filtering range. Sometimes users want to browse/jump to some extent once they have results without having to come up with a new filtering range. Many users aren't browsing with as much purpose as "I'm binary searching the results".
sccrstud92@reddit
Sorry for being unclear, but in this scenario "the user wants to binary search the pages" was not an assumption; it was stipulated here. I am totally on board with the possibility of other scenarios where it is valid to jump to a specific page, which is why I specifically asked for such scenarios here. It just so happens that 2 of the responses, or maybe all 3 of them, said they do it to binary search the pages, and we are in one of those threads now. If there is another scenario that you think is valid that isn't a binary search, I would encourage you to start the discussion up there so this specific thread doesn't spiral off.
TehLittleOne@reddit
I can get behind that. Sadly most UX that I've come across do not allow such complex filtering.
It's worth noting that a user does need to know it's 2017. In reality, I would probably know it's a few years ago and peg a range like 2015 to 2019 and sift through a little more. A better subset for sure but not enough to remove needing pagination of some sort.
sccrstud92@reddit
Yeah it won't necessarily eliminate pagination, but it should cut the result set down far enough that you can do an exhaustive search through the result set, which only requires prev/next page functionality, not "jump to page" functionality.
Raildriver@reddit
manually binary searching
lord_braleigh@reddit
That sounds like searching by date. Why use pages instead of dates?
brimston3-@reddit
Because many, many people are very bad at remembering or even estimating dates, but very good at remembering approximate relative chronology of events, even if they don't remember keywords that could be used to identify those events without seeing the description (and contextual events around them).
And that kind of imprecise criteria is just hard to bake into a search query.
lord_braleigh@reddit
A date is a number, and a page is also a number? I don’t see why you prefer arbitrary numbers to numbers that have real meaning.
chucker23n@reddit
This.
For example, suppose a site offers a list of stories, ordered alphabetically. You can navigate by first letter, but that’s still a dozen pages. You cannot navigate by second letter. But you can estimate from where the first page for that letter ends whether you need to go to the next page, last page, or somewhere in the middle.
Rinse, repeat.
sccrstud92@reddit
You can't binary search for something unless you know the value for the ordered field. In the example I asked about the user did not know what they were looking for, so an exhaustive search is the only way to guarantee you can find it.
remy_porter@reddit
Because I know it's unlikely to be at the beginning or the end. I just don't know where it is.
SilasX@reddit
For me, it's because, say, all the results are appropriate, and I know I've already looked at the first, say, six pages of them. Like, when looking through saved links on one of my tags in the Firefox Pocket app.
Yeah, in theory, I could "just" say, "okay, hm, you've ordered it by date, I've looked at the ones that I've saved up to ... hm, how do I look up saved dates? Oh, there it is. Just give me the ones after 01/24/2021".
Or, you know, you could just ... let me click "page 7". Which I can't do because of your stubborn insistence on using infiniscroll. Thanks for unsolving a well-solved problem.
carlfish@reddit
Yeah this is a valid use case. "I can kind of place what I'm looking for as "before x" or "after y", but I won't know what x or y are until I see them."
KevinCarbonara@reddit
It's wild to say this in response to the alternative being endless scrolling
amakai@reddit
Endless scrolling is not the solution. Good filtering and providing good breakdown of data is the solution.
NotGoodSoftwareMaker@reddit
Endless scrolling is the solution because then you frustrate the user and they give up on your product. Therefore the problem of finding the data efficiently has been solved
/s
carlfish@reddit
I love how you cut the quote off right before I give examples of alternatives to endless scrolling.
PangolinZestyclose30@reddit
Where do you get this certainty?
I often use jumping between pages for exploration. I want to get a feel for the data set, I see there's way too many items, I sort by e.g. date and then sort of jump through the timeline. Often I can start seeing some patterns this way.
gurraman@reddit
Or if you want to pick up where you left off.
vytah@reddit
You can't pick up from where you left off if the webpage uses offset-based pagination. When you come back, everything will move around, and depending on the page, you'll either have to reskim things you've already seen, or miss things you haven't seen yet.
himself_v@reddit
Depending on the data, you can. Forums do this with long comment threads just fine.
vytah@reddit
You're reading page 5. You close the window. A moderator deletes all posts on pages 2 and 3. You come back. Not only were none of the posts you're now seeing on page 5 before; there are also unseen posts on page 4 now.
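That drift is easy to reproduce, e.g. with SQLite (a toy posts table, 2 posts per page, a "moderator" deleting the middle rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO posts VALUES (?)", [(i,) for i in range(1, 11)])

def offset_page(conn, page, size=2):
    return [r[0] for r in conn.execute(
        "SELECT id FROM posts ORDER BY id LIMIT ? OFFSET ?",
        (size, (page - 1) * size))]

before = offset_page(conn, 3)                    # "page 3" while all posts exist
conn.execute("DELETE FROM posts WHERE id IN (3, 4, 5, 6)")  # moderator purge
after = offset_page(conn, 3)                     # same page number, different posts
```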
himself_v@reddit
The way it works in those forums is deleted posts leave a "This post has been deleted". This is both useful by itself and protects against pages shifting.
But even in places where the pages can shift, sometimes that's acceptable and more intuitive than what the author suggests. Depends on the use case.
beaurepair@reddit
Obligatory "fuck endless scrolling". Such a terrible design on web that makes navigating around impossible
Dustin-@reddit
I think being oblivious to the users' use cases is basically the only requirement of being a programmer.
nermid@reddit
That's not true. There are plenty of them who seem to clearly understand the user's desires and deliberately subvert them.
amakai@reddit
If the software you are building is simple (think mailbox), then this is not a reasonable request.
If the software is complex and data-specific (think some analytical tool), then it's reasonable to request exactly this - breakdown of a dataset - as a separate feature in the UI. For example, you could show a box with top 10 values for each important key and number of elements for that value. Something like "2022-01-01 - 15 rows, 2022-05-07 - 8 rows, 2022-02-03 - 3 rows", then user can click on a specific date to add it to the filter.
But again, every software is different, and UX designers should understand what is "a feel for the data set" and how to provide it to the user without them having to open random pages.
PangolinZestyclose30@reddit
Why not? A lot of software provides this out of the box without even having to ask for it.
Yeah, so you'll build some complex and hard to use UI just to avoid paging. Strange.
Yeah, and that's why paging is great since it works for pretty much any software. No need to learn specific UIs as a user, I can just jump through the data.
d1stor7ed@reddit
There really isn't a good way to do offset paging once you have 10M+ records. Trust me on this. It will seem to work well until you get into the later pages.
Uristqwerty@reddit
I'd assume offset is still better than repeatedly querying for the next page, so an efficient system would combine the two when appropriate. That way, jumping from page 20 to 27 costs the same as from 51 to 58, and to jump straight from page 1 to 58, only the database itself needs to perform 57x the work (if that!), not the user, the network, nor the rest of the backend.
ExtensionThin635@reddit
I fail to see an issue unless you are coding a literal book, and even then assign an id to each page
vytah@reddit
You don't.
You can offer some anchor points to let people navigate faster. For example, if you're sorting by date, you can offer months. If by name, you can offer initial letters. If by type, you can list those types. If by amount, you can offer some preset amounts.
Of course sometimes you don't want to display those anchor points, maybe because the user wants to have something less intimidating than a date selector. Like a clickable continuous timeline.
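Computing such anchor points is a one-liner per sort key; a sketch for initial letters over a made-up docs table, with a keyset jump to a chosen letter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (name TEXT)")
conn.executemany("INSERT INTO docs VALUES (?)",
                 [("apple",), ("avocado",), ("banana",), ("kiwi",), ("kale",)])

# One row per initial letter, with a count: enough to render "A (2) B (1) K (2)".
anchors = conn.execute(
    """SELECT upper(substr(name, 1, 1)) AS letter, count(*)
       FROM docs GROUP BY letter ORDER BY letter""").fetchall()

# Clicking the "K" anchor becomes a keyset-style seek, not an offset.
k_page = conn.execute(
    "SELECT name FROM docs WHERE lower(name) >= 'k' ORDER BY name LIMIT 2"
).fetchall()
```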
Mastodont_XXX@reddit
Window functions?
https://use-the-index-luke.com/sql/partial-results/window-functions
ehaliewicz@reddit
Query for page 2 through 7 :)
CrunchyTortilla1234@reddit
so solution is to make shitty UI, ok
ehaliewicz@reddit
Give me an example of something you need to be able to click on an arbitrary page for that isn't searching or just picking a random item.
I'm not saying it never happens, but it's rare in my experience.
mccoyn@reddit
I've had to do this when looking for old emails. I don't know exactly what search terms I need and I don't know the date. So, I jump a few pages and look at the subjects of those emails. Was the project I am looking at before or after the stuff I see on this page? Then I jump a few more pages. Keep doing this until I narrow down the time frame that contains what I need to find. This is really a last resort thing. Normally, searching for keywords or dates works, but not always.
CrunchyTortilla1234@reddit
An invoice. My bank account history. You know, the things that usually have a lot of data behind them?
ehaliewicz@reddit
You can still paginate with cursor based pagination, you just can't jump to a random page as efficiently as possible (neither can offset/limit, it still has to scan the extra data).
Generally when I'm scrolling through bank account history, or really anything with pages, I go page by page, rather than just jumping to an arbitrary page.
For most pagination, that is the case. With cursor based pagination, you're simply optimizing for the most common case.
Vlyn@reddit
Not the same guy and I generally agree with you, but in the case of bank statements the other guy is kinda right.
When I have 10 pages with results and today's date is on the first page.. and I want to look for a transaction I did roughly a month ago, then I might already know it's probably on page 3. Or maybe page 4, I just look at the date I land at.
Of course a good solution would be to filter on the date, but being able to jump around or step through page by page is a nice feature. And date filtering with the UI is usually a pain in the ass usability wise.
Endless scrolling would also work of course (+ filtering if it's really far in the past), it might put more strain on the bank servers though.
ehaliewicz@reddit
You still can jump to arbitrary pages with cursor based pagination, it's just less efficient.
sauland@reddit
What's so special about invoices that you magically just know that the invoice you're looking for is specifically on page 17?
CrunchyTortilla1234@reddit
I meant entries in the invoice, when I want to check whether it has everything I ordered for example
ehaliewicz@reddit
Page by page iteration is more efficient with cursor based pagination, it's just jumping to arbitrary pages that is worse.
sauland@reddit
How does being able to go to an arbitrary page help with that?
fredlllll@reddit
In what way is that better than just using offset? XD You're still ignoring all the previous output.
ehaliewicz@reddit
It's a joke, but generally unless I just want to "pick a random item" I don't actually care about jumping to a random page, I'm usually searching.
fredlllll@reddit
well this might hold true for a search function. but what about listing your reddit comments/posts? or list your images on imgur that you uploaded over the years.
ehaliewicz@reddit
If I just want to browse through comments/posts I've made? Infinite scroll would be just as effective as pages. If I want to find a specific post, search would be better than going through pages or infinite scroll.
Again, not sure how pages do this any better than just scrolling.
awfulentrepreneur@reddit
You create a page table.
duh ;)
Wombarly@reddit
It doesn't only work for endless scrolling though, you also have the option to have regular old Previous and Next buttons.
Numnot299@reddit
I'm surprised no one has mentioned deferred joins yet. No WHERE condition with col > ? needed; just leverage the index and jump to any page (offset value) you need. I implemented it at work and it works great: deferred joins
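The deferred-join shape, sketched with SQLite (the inner query pages over the narrow indexed id column only; the join then fetches the wide rows for just one page):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, "x" * 100) for i in range(1, 1001)])

# The OFFSET scan walks the primary-key index, not the full rows;
# only the 10 surviving ids are joined back to fetch the payloads.
rows = conn.execute(
    """SELECT items.id, items.payload
       FROM items
       JOIN (SELECT id FROM items ORDER BY id LIMIT 10 OFFSET 500) AS page
         ON items.id = page.id
       ORDER BY items.id""").fetchall()
```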
raydeo@reddit
People talk about monotonically increasing ids, sortable collections, etc., and it's good enough for showing some data on a website. The devil is in the details of how the ids are generated relative to the commits.
If you actually want a syncable API of incremental changes since the previous sync, over a mutable collection, that doesn't miss any changes, your options are incredibly limited. People forget that changes are not typically written at a serializable isolation level, and ids and timestamps are consumed/generated at a different time than when they are written/committed to the db and become visible to the sync APIs. Doing this without write races that create gaps at read time is way more complicated in a high-frequency setting. You basically have to serialize writes such that the id is generated and written before the next transaction generates its id, which obviously doesn't work well in a high-frequency setting either. I think this is rarely done correctly: the write path has to be carefully coordinated against the read cursor so that they are consistent.
YumiYumiYumi@reddit
I've never really understood why OFFSET has to be slow, assuming a tree index can fully cover the WHERE and ORDER BY clauses.
If the index has row counts at each node in the tree, determining the row at an offset should be O(log n). Insert/update/delete should also be O(log n) [need to update the count at each parent node].
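What this describes is an order-statistic tree: each node stores the size of its subtree, so selecting the row at offset k walks one root-to-leaf path. A minimal (unbalanced, insert-only) sketch of the idea:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1  # number of keys in this subtree, maintained on insert

def insert(node, key):
    if node is None:
        return Node(key)
    node.size += 1  # the new key lands somewhere below, so count it here
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    return node

def select(node, k):
    # 0-based rank: the row a query with OFFSET k would start at,
    # found in O(height) instead of walking past k rows.
    left_size = node.left.size if node.left else 0
    if k < left_size:
        return select(node.left, k)
    if k == left_size:
        return node.key
    return select(node.right, k - left_size - 1)

root = None
for key in [50, 20, 80, 10, 30, 70, 90]:
    root = insert(root, key)
```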
EluxTruxl@reddit
You can do this, but it leads to some performance tradeoffs that aren't generally worth it. Firstly, you need to propagate the counts to the root of the B-tree, which requires extra writes to disk. And secondly, since everything needs to update the root, there is a lot of extra contention.
YumiYumiYumi@reddit
Thanks for the info.
I can see that this does add some overhead, though I think a portion of it could be mitigated (e.g. only flush counts to disk periodically).
Regardless, I can definitely see cases where it'd be worth it, such as read heavy, low write loads. The DBMS could just make it a separate index type (e.g. an "offset index") and let the user decide whether the extra write load is worth it.
I find it odd that no-one does this though.
vbilopav89@reddit
How about inserting the sorted and filtered data into a temporary table and then doing the limit/offset from that temp table?
Fiennes@reddit
That temporary table takes resources. Sure for a couple of users and not much data this would work. Then scale that to just hundreds of users with their own sort criteria and you're dead in the water.
Dunge@reddit
Isn't the answer to that using cursor? I never used it, opened the article to find information on how to do it properly, came back with no solution.
pheonixblade9@reddit
cursors are inherently stateful, create locks and can use a lot of memory, and aren't really a good fit for modern applications. they do have their place in something like an ETL process with frozen datasets perhaps, but not really appropriate for interactive applications.
you're better off taking the memory/disk hit and using indexes that precompute pagination if possible, but just keep in mind that adding indexes increases insert time (generally linearly), too.
cant-find-user-name@reddit
the cursor they are talking about is probably cursor in cursor based pagination, also called keyset pagination by some. They aren't talking about sql cursors.
pheonixblade9@reddit
fair enough, but that does run into issues if you don't properly design your IDs/order bys.
cant-find-user-name@reddit
Yes, it is a very complex pattern. I implemented it at a previous company because we were using GraphQL, and GraphQL recommends keyset pagination, and it was very difficult to do. I am still not very comfortable with it.
pheonixblade9@reddit
I'm aware, I used a similar pattern designing APIs at Google, we just called it something different ;)
delThaphunkyTaco@reddit
Odd never had a problem
Alarmed-Moose7150@reddit
Odd that you've never had a problem, it's excessively common if your paginated data ordering can change at all.
delThaphunkyTaco@reddit
like live data?
robberviet@reddit
Page pagination is common because it's simple and intuitive. It's not like we are dumb and not aware of cursor pagination.
bighi@reddit
People should stop misusing the “considered harmful” phrasing when they just mean “I don’t like using it”.
Also, people should stop posting about it. If you don’t like using something, why should anyone else care about it? I don’t like onions, and I’m not posting about it.
Skithiryx@reddit
It’s considered harmful because it can really tax your database cpu and make later pages take progressively longer and longer to fetch.
bighi@reddit
Yes, but look at your sentence. “Can”. Saying something CAN be harmful is different from saying it IS harmful. The difference in meaning might be misinterpreted.
Like if you say that salt in food is harmful. Someone reading might think they should avoid every salt from now on. But you need salt to live, and without salt you die.
Of course nobody dies by using database features. But there are already lots of junior devs (and even some more experienced) with lots of irrational dogmas created by “considered harmful” articles.
I had discussions in code reviews because some junior dev insisted that using map was considered harmful, and many other situations like that. People taking an argument that is valid like “this thing in this context is usually bad” and turning it into “this thing is ALWAYS bad”. And these clickbaity articles that strip away all nuance only contribute to that.
editor_of_the_beast@reddit
Raise your hand if you’ve ever had someone tell you that pagination is a solved problem👋👋👋👋
gelatinousgamer@reddit
As a user, I really don't like cursor-based pagination. It's so convenient to be able to edit a page number in a URL and actually predict where you'll end up.
Cursor-based pagination can also lead to really funky behavior if the ordering of the cursor element changes while browsing. Look no further than right here on Reddit.
Trygle@reddit
Here I thought this was about guitars...
misuo@reddit
You should be able to repaginate even if all “original” data are identical, right? But in that case at least one column must have unique values, if not there originally, then added.