CVisionIsMyJam@reddit
io_uring and oauth 2.0 support seem pretty slick
Conscious-Ball8373@reddit
I wonder if oauth support will begin to change the common pattern where a database has a single user which is used by a web application which implements its own user system. If postgres supports the same auth tokens the web app is already using, then perhaps it makes sense for database operations to happen as that user, and to use the database's system of roles and row-level access controls instead of implementing them in the application layer?
It would be a major change in mindset for web people, but it would also prevent a lot of reinvention of wheels (and probably a fair number of security blunders when the wheel doesn't quite work right).
riksi@reddit
No, because you will lose connection pooling, and each connection has a lot of overhead (open/close, separate process, memory overhead, CPU context switching, etc.)
ArtOfWarfare@reddit
Just yesterday there was a discussion here about how most services never actually have more than ~500 users…
We didn’t talk about the breakdown of the users, but I wouldn’t be surprised if 10% of your users are 10x more active than the rest… so you can just have connections in the pools for those heavy users and not worry about the fact that connections for the 90% of infrequent users are a bit slower.
belkh@reddit
Clearly we need stateless access for postgres 19 then
hpxvzhjfgb@reddit
wake me up when we get unsigned integers
ants_a@reddit
Wake up
hpxvzhjfgb@reddit
I know about that
BlackenedGem@reddit
Index skip-scan is by far the feature I'm most excited about here. Async IO is very useful, but being able to get rid of a bunch of extra indexes (or manually rolled skip-scan SQL) will be huge from a DBA perspective.
And it'll also be better for people new to postgres because they can index in a way that "feels sensible" and not have performance drop off a cliff. Before there was a lot of headscratching of "why does it matter which way round the columns are, can't postgres figure this out for me?".
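To make the skip-scan idea concrete, here's a toy Python sketch (all names and data invented, and a sorted list standing in for a B-tree): with an index on (a, b) and a predicate only on b, a skip scan jumps to each distinct leading value of a and probes for b there, instead of walking every entry.

```python
import bisect

# Toy stand-in for a B-tree index on (a, b): a sorted list of tuples.
index = sorted((a, b) for a in range(5) for b in range(100))

def skip_scan(index, b_target):
    """Find all entries with b == b_target despite no predicate on the
    leading column: jump to each distinct value of `a` and probe once."""
    results = []
    pos = 0
    while pos < len(index):
        a = index[pos][0]  # current distinct leading value
        # Binary-search probe for (a, b_target) inside this a-group.
        hit = bisect.bisect_left(index, (a, b_target))
        if hit < len(index) and index[hit] == (a, b_target):
            results.append(index[hit])
        # Skip past the rest of this a-group to the next distinct a.
        pos = bisect.bisect_right(index, (a, float("inf")))
    return results

print(skip_scan(index, 42))  # one probe per distinct a, not a full scan
```

This does a handful of binary searches (one per distinct `a`) rather than scanning all 500 entries, which is why the win shrinks as the leading column's cardinality grows.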
ants_a@reddit
Sorry to disappoint you, but it will still matter which way round the columns are. Column order determines index structure and that is important for performance. Skip scan will not magically make that disappear.
BlackenedGem@reddit
Obviously
shogun77777777@reddit
OMG I’M SO EXCITED!!!!!!!!!!!!!!!!!!!!!!!!!
frostbaka@reddit
Upgrade Postgres, get excited for next Postgres...
deanrihpee@reddit
when you liked a piece of tech too much
I'm guilty of this too lol
frostbaka@reddit
We are finishing an upgrade of a 3.5 TB, 18-node cluster spanning 2 datacenters to Postgres 16, and it's already outdated.
mlitchard@reddit
Is it doing the job? Then it’s not outdated.
frostbaka@reddit
Yes but the article says get excited...
mlitchard@reddit
Well, I do love PostgreSQL; it picks up where Haskell leaves off, as it were. But I don't need the latest until I do. I use nix, so I (perhaps naively) am not worried about upgrading.
psaux_grep@reddit
Upgrading big databases is still painful.
mlitchard@reddit
I feel that
Dragon_yum@reddit
Oh boy oh boy a new version of a database!
INeedAnAwesomeName@reddit
yea like the fuck do u want me to do
dontquestionmyaction@reddit
Maybe it's time to actually learn SQL and use it? The DB is your friend
grauenwolf@reddit
We're already doing that.
Pheasn@reddit
That section on UUIDs read like complete nonsense
VirtualMage@reddit
Why? Made sense to me... UUIDv7 ensures that each new generated ID is "larger" than all IDs generated before. But still random on the right part.
Think about numbers where first part is time and last digits are random.
The nice thing is that when you insert them into the index (a tree) they always fit nicely at the end. So you don't insert "in the middle" of the tree, which is not optimal.
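Here's a minimal Python sketch of that layout, following the RFC 9562 bit structure (this is hand-rolled for illustration, not how Postgres generates them): the top 48 bits are the Unix timestamp in milliseconds, the rest is version/variant bits plus randomness, so later IDs always compare greater.

```python
import os
import time
import uuid

def uuidv7() -> uuid.UUID:
    """Hand-build a UUIDv7 per the RFC 9562 layout: 48-bit Unix
    millisecond timestamp, 4 version bits, 12 random bits, 2 variant
    bits, then 62 more random bits."""
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF
    rand_b = int.from_bytes(os.urandom(8), "big") & 0x3FFF_FFFF_FFFF_FFFF
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

first = uuidv7()
time.sleep(0.005)
second = uuidv7()
print(first < second)  # True: the later timestamp sorts after the earlier one
```

Because the timestamp occupies the most significant bits, IDs generated at different times sort by creation time regardless of the random suffix, which is exactly the "always append at the right edge of the tree" property.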
Pheasn@reddit
They talk about UUID versions as if they're incremental improvements, when in reality the version only affects generation and semantics. It also sounds like explicit support for UUIDv7 storage was needed, which is not true.
Linguistic-mystic@reddit
It didn't make sense. They mentioned an overhaul but didn't say how to convert the UUIDs to timestamps. They also included DDL with an index created over a primary key with no explanation. No indication of what the "overhaul" was actually about.
danted002@reddit
They mention that uuid7 has the first part encoded as a timestamp, which increases locality.
CrackerJackKittyCat@reddit
Exactly. Sortability makes the btrees more compact, fewer rebalances.
And both application and db-side logic can extract the timestamp component as meaningful, if they dare.
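Extracting that timestamp is a two-line shift in Python, since per RFC 9562 the top 48 bits are Unix milliseconds (the sample UUID value below is made up for the round-trip demonstration):

```python
import datetime
import uuid

def uuidv7_unix_ms(u: uuid.UUID) -> int:
    """Recover the creation time from a UUIDv7: per RFC 9562 the top
    48 bits are the Unix timestamp in milliseconds."""
    return u.int >> 80

def uuidv7_datetime(u: uuid.UUID) -> datetime.datetime:
    """Same, but as a timezone-aware datetime."""
    return datetime.datetime.fromtimestamp(
        uuidv7_unix_ms(u) / 1000, tz=datetime.timezone.utc
    )

# Round-trip on a hand-built UUIDv7 (the millisecond value is invented):
ms = 1726771240496
u = uuid.UUID(int=(ms << 80) | (0x7 << 76) | (0b10 << 62))
assert uuidv7_unix_ms(u) == ms
print(uuidv7_datetime(u))
```

Whether you *should* treat the embedded timestamp as meaningful application data is the "if they dare" part: it's a creation time, not a guaranteed business fact.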
TomWithTime@reddit
So is it like a combination of an xid and a uuidv4? V4 format but with some section of it computed from time?
olsner@reddit
First time I’ve seen hexadecimal (or presumably binary rather than having any actual hex digits in storage) described as ”compressed decimal” 😅
A-Grey-World@reddit
Ooo there's a new version of UUID! Exciting, I missed that.
PabloZissou@reddit
Great!
raphired@reddit
Native temporal tables? No? Zzzzzzz.
timangus@reddit
Do I have to?
vision0709@reddit
No