@nickchapsas

For those of you screaming about adding an index: first, an index isn't free, and second, even if you add an index, this request takes 232ms. I'm not a maths person, but 232ms feels like more than 2ms.

@fedayka

This is a very handy package, but I would argue that it is not really improving database performance; it is just avoiding hitting the database every time.

@lucasmicheleto2722

Title is kinda wrong, but the package seems awesome

@Arcadenut1

Someone should submit this to Code Cop for review...

@TheSilent333

Aside from just the client speed gain, this can also reduce backend load significantly. I can already think of a dozen projects at work where this will come in handy.

Thanks!!!

@andreistelian9058

I think it is a very good package, but I hope the maintainer will also do an implementation for PostgreSQL. Nice video, Nick!

@serverlesssolutionsllc8273

ETags are a great tool - but that timestamp field does more than enable deltas. It allows for optimistic concurrency for writes to the database, which can be super useful as well!
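To make that concrete: a minimal sketch of rowversion-backed optimistic concurrency in EF Core, assuming a SQL Server rowversion column. The Product entity, AppDb context, and Version property are illustrative, not from the package.

```csharp
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";

    // Maps to a SQL Server rowversion column. EF Core treats it as a
    // concurrency token and adds "WHERE [Version] = @original" to UPDATEs.
    [Timestamp]
    public byte[] Version { get; set; } = Array.Empty<byte>();
}

public class AppDb(DbContextOptions<AppDb> options) : DbContext(options)
{
    public DbSet<Product> Products => Set<Product>();
}

// If another process changed the row between your read and your write,
// the UPDATE matches zero rows and EF Core surfaces the conflict:
//   try { await db.SaveChangesAsync(); }
//   catch (DbUpdateConcurrencyException) { /* reload, merge, or report */ }
```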

@silentdebugger

This seems neat, but as other people have said, it's perhaps of limited usefulness: the first client load is still slow, and it doesn't improve much on the backend if you have a lot of clients. It would be nice to see some kind of memcached that was aware of rowversion, so the cache would be shared across all users querying the same data.
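A rough sketch of that idea, not an existing feature: cache responses server-side keyed by the rowversion observed for the request, so every client asking for the same data shares one entry. RowVersionCache, the key format, and the use of IMemoryCache (a distributed cache such as Redis would fit the same shape) are all assumptions.

```csharp
using Microsoft.Extensions.Caching.Memory;

public class RowVersionCache(IMemoryCache cache)
{
    // version: the database rowversion observed for this request.
    // Same version => same data, so all clients hit one cache entry;
    // any write bumps the version, which retires old entries naturally.
    public Task<string?> GetOrLoadAsync(string version, Func<Task<string>> load)
        => cache.GetOrCreateAsync($"products:{version}", _ => load());
}
```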

@rik2243

As someone coming from a DB background, I don't see how this is possible. If you use proper indexes and well-written stored procedures (you can't get faster than an SP, as it runs at the DB level), this will just be some kind of caching at the UI end. Many tables will be updated by other processes, so each time it will need to check whether the table has been updated, which means it will either be slow or the data might be stale.
I would just suggest you do things properly at the DB level; then this wouldn't be needed.
You can always load common data, or all data if size permits, into memory if you need very quick response times.
I guess if you don't want to optimize things at the DB level, then use this, but I would suggest that's a bad idea.
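For scale on the "check if the table has been updated" step: the freshness probe under discussion can be a single trivial query rather than a re-run of the workload. A sketch, assuming SQL Server and not necessarily the package's exact query:

```csharp
using Microsoft.Data.SqlClient;

public static class FreshnessProbe
{
    // @@DBTS returns the database's last-used rowversion value. Comparing
    // it to the value embedded in the client's ETag decides between a
    // 304 Not Modified and actually re-running the query.
    public static async Task<byte[]> GetDbVersionAsync(string connectionString)
    {
        await using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();
        await using var cmd = new SqlCommand("SELECT @@DBTS", conn);
        return (byte[])(await cmd.ExecuteScalarAsync())!;
    }
}
```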

@pylvr8021

What happens when you have a query that joins two tables, and table 1 didn't change but table 2 did?
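One way a library can stay correct here, sketched under the assumption that the freshness check spans every table involved rather than just one: take the max rowversion across all of them, so a write to either joined table changes the combined value and invalidates the ETag. The Orders/Customers tables and their RowVersion columns are hypothetical.

```csharp
using Microsoft.Data.SqlClient;

public static class JoinVersionCheck
{
    // Max rowversion across both joined tables: a write to either one
    // produces a new combined value, so the stale ETag stops matching.
    public static async Task<long> GetCombinedVersionAsync(SqlConnection conn)
    {
        const string sql = """
            SELECT MAX(v) FROM (
                SELECT MAX(CAST([RowVersion] AS bigint)) AS v FROM Orders
                UNION ALL
                SELECT MAX(CAST([RowVersion] AS bigint)) AS v FROM Customers
            ) AS versions;
            """;
        await using var cmd = new SqlCommand(sql, conn);
        return (long)(await cmd.ExecuteScalarAsync())!;
    }
}
```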

@iandrake4683

I've built stuff like this manually.  This will save a ton of time.

@MahmoudBakkar

Agree with most of the comments: it's a handy package. But I'd rather say it's a fix for API performance, not database performance.

N/A

It's always Simon Cropp. One of three 'Permanent Patrons' on the Fody project here. It's always Simon Cropp.

@kayhantolga

Nick's channel is one of the rare channels I haven't blocked yet over funny-face thumbnails.

@Lothy49

I mean, I guess it fixes an issue by thinning out the herd, so to speak. Is there an index on the rowversion column in your database, though, so that it can efficiently retrieve max(rowversion)? Otherwise you'd be doing a table scan to determine that max aggregate.

Anyway, the idea itself is kinda cool. Transparently adding ETags and so on. I just think that a less-than-aware developer would take this fix and run with it, and eventually they'd have two problems to fix instead of just one problem.
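On the index question above: if MAX(rowversion) shows up as a table scan, an index on the column turns it into a cheap read of the index's last entry. A sketch via an EF Core migration; the table, column, and index names are hypothetical. It also illustrates Nick's "an index isn't free" point: a rowversion column changes on every update, so this index gets rewritten on every write.

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class IndexRowVersion : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // MAX([RowVersion]) can now be answered from the tail of the
        // index instead of scanning the whole table.
        migrationBuilder.Sql(
            "CREATE NONCLUSTERED INDEX IX_Products_RowVersion " +
            "ON dbo.Products ([RowVersion]);");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.Sql(
            "DROP INDEX IX_Products_RowVersion ON dbo.Products;");
}
```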

@lyudmilpetrov79

Thank you, Nick. As always, great content, and all the best.

@trojakm

Now this is actually very cool, even if limited to specific scenarios. Still, ingeniously simple.

@BasuraRatnayake

Wow a fantastic and easy to use library, thanks for sharing Nick

@urbanelemental3308

OMG been waiting for this forever.  <3

@xopabyteh

I am seeing a lot of criticism in the comments and, although it's 99% valid, I think this is a great caching approach and it is super easy to use, especially for smaller applications with front ends like Blazor. Thx Nick