-
• #18702
Might as well move this old thing there too
-
• #18703
All moved
-
• #18704
I also can't program (unless you count smashing bits of VBA together), but it looks like there are three sequential parts to the query - could you add a condition to the latter two to ignore any records found in the preceding part(s)?
-
• #18705
The query optimiser within PostgreSQL will execute them in parallel before merging the results within the UNION.
At the time each subquery runs, the result from the others is not known.
The UNION that discards the duplicates can only do so within each type of update ("someone replied", "new thread in forum", "you've been mentioned")... so multiple mentions of you in a comment are removed, etc.
If I try and run each query in a specific order, perhaps by structuring the queries as preceding WITH queries, then I could use the output of those to feed into the next queries to remove the duplicates.... however...
The current query produces a single result set of all types of updates blended together as a single conceptual table, and this makes pagination of updates work well... I can change the LIMIT and OFFSET in one place and get to any page of updates.
But if I split the subqueries into separate queries then each of these needs to be paginated separately, and this means search results suddenly need complex state to be tracked, to know the offset per query per search... alternatively I could materialise every visit to the updates page as a full table so that it could be paginated over.
Both of those approaches are insane... so the only real option is what we have already.
So if the query is fine... what can be done?
The only thing I see is to remove duplicates in the API / front-end after the query is run. But this will mean some pages won't actually have 25 updates on them.
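For illustration, the overall shape is roughly this, with made-up table and column names standing in for the real schema (a sketch, not the actual query):

-- A minimal sketch of the shape described above; table and column names
-- are hypothetical, the real schema differs.
SELECT created, item_type_id, item_id, COUNT(*) OVER() AS total
FROM (
    -- "someone replied"
    SELECT created, 1 AS item_type_id, item_id FROM reply_updates   WHERE for_profile_id = $1
    UNION
    -- "new thread in forum"
    SELECT created, 2 AS item_type_id, item_id FROM thread_updates  WHERE for_profile_id = $1
    UNION
    -- "you've been mentioned"
    SELECT created, 3 AS item_type_id, item_id FROM mention_updates WHERE for_profile_id = $1
) AS updates
ORDER BY created DESC
LIMIT 25 OFFSET $2;
-- UNION only removes rows that are identical across every column, so the same
-- underlying item appearing under two different item_type_id values survives,
-- and pagination works by changing LIMIT/OFFSET in this one place.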
-
• #18706
Or make it easier by presenting the three sets of items separately, rather than as a single Following page.
-
• #18707
I also can't program
Understanding how databases behave is a whole separate discipline from programming. A large number of programmers are largely or entirely ignorant of the subject. A depressing number of them have a dismissive attitude even to SQL, which reveals them to be a lot dumber than they realise. The good ones have a reasonable grasp, but it's a specialist subject and in larger enterprises often an entirely distinct role.
-
• #18708
Maybe just whack a
distinct on
on the big union? (and update the order by clause appropriately)
SELECT DISTINCT ON (created, item_type_id, item_id) COUNT(*) OVER() AS total, ...
I haven't grokked the whole query so I could be way off.
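To make the "update the order by clause appropriately" bit concrete: DISTINCT ON requires the leading ORDER BY expressions to match the DISTINCT ON list, so it would pair up roughly like this (everything beyond the column names in the snippet above is a guess):

-- A hedged sketch only: "updates" stands in for the big UNION subquery.
SELECT DISTINCT ON (created, item_type_id, item_id)
       created, item_type_id, item_id,
       COUNT(*) OVER() AS total
FROM updates
ORDER BY created DESC, item_type_id, item_id
LIMIT 25 OFFSET $1;
-- Note: the window COUNT(*) OVER() is computed before DISTINCT is applied,
-- so the total would still count the duplicate rows.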
-
• #18709
Oooh... I forgot all about
SELECT DISTINCT ON
despite having used it once in the past. I think that would work.
-
• #18710
Oooh... I forgot all about SELECT DISTINCT ON despite having used it once in the past.
Did you forget or hit performance issues and drop the idea? DISTINCT can put a significant load on query execution. People often end up trying to use a combination of careful database design and GROUP BY to get better speed while still eliminating or minimising duplicates. In my experience, DISTINCT is most often useful with simpler, smaller datasets.
Proof is in the testing, ofc.
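For what it's worth, the GROUP BY flavour of that usually looks something like this (names are placeholders, not the real schema):

-- Sketch of the GROUP BY approach: collapse duplicates by grouping on the
-- identifying columns and keeping one value per group, rather than asking
-- the planner to de-duplicate whole rows with DISTINCT.
SELECT item_type_id, item_id, MAX(created) AS latest
FROM updates
GROUP BY item_type_id, item_id
ORDER BY latest DESC
LIMIT 25 OFFSET $1;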
-
• #18711
What version of postgres is lfgss on? Pre 9.6 wasn't multithreaded, and pre 12, CTEs (and subqueries?) were basically optimisation fences.
If a DISTINCT ON causes performance issues, upgrading postgres could be a solution.
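(On the fence point: before PostgreSQL 12 a WITH query was always materialised; from 12 onwards it can be inlined into the outer query, and you can state the choice explicitly. "updates" below is just a placeholder name.)

-- PostgreSQL 12+ only.
WITH recent AS NOT MATERIALIZED (   -- ask the planner to inline this CTE
    SELECT item_id, created FROM updates WHERE created > now() - interval '1 day'
)
SELECT * FROM recent ORDER BY created DESC;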
-
• #18712
At this point in time I may have to admit it's an old PostgreSQL version.
If anyone is extremely comfortable with upgrading Postgres please let me know, would love to get us to latest stable.
We are on 9.2 with the ltree module installed.
-
• #18713
fwiw, I decided to make an LFGSS android app a while back. Unfortunately life got in the way and I never really made much progress, but in the process, I did containerise Microcosm, and I can tell you now that it builds and boots up just as easily with 9.2 as it does with 12.
I've upgraded Postgres versions a few times and it's usually not much more hassle than doing a dump and restore. Last time I upgraded from 9.3 to 10, the syntax checker for stored procedures was a bit stricter and forced us to correct some errors no one had noticed before, but other than that it was pretty painless.
-
• #18714
Cheers boss man, amazingly fast nuking
-
• #18715
Pure luck that I'm conscious at this hour
-
• #18716
Poor person, that happened faster than it took them to register and list the ad!
-
• #18717
Would it be Black logo time, as Yankee Shitbag aka Micheal Stromberg has passed away?
-
• #18719
Is there a Rider Down thread already? There needs to be one. It doesn't have to be a road traffic incident, just needs a thread with citation / link so that once I send the logo black people can see why.
-
• #18720
Sorry, didn't know the etiquette. Not sure if that's the right word, or even if I should have asked.
-
• #18721
It's cool.
When we go black logo people look in the Rider Down forum... hence I need a thread to exist before I switch the logo so that people can figure out why the logo has changed.
-
• #18722
As he/she is on a warning about shit posting, is there any way to stop them continually changing their username?
1 Attachment
-
• #18723
is there anyway to stop them
Is there any point in trying? Velocio is a robot, he deals only in numbers, in this case 55649. It makes no difference what human-readable label is attached.
-
• #18724
Hmm... I think you'll find the Boss is as fragile as anyone else
-
• #18725
Is Deckard a replicant?
A few more potential candidates for moving:
https://www.lfgss.com/conversations/344467/#comment15347998
https://www.lfgss.com/conversations/312234/?offset=475#comment15242196
https://www.lfgss.com/conversations/241554/?offset=175#comment15165679
https://www.lfgss.com/conversations/333920/?offset=400#comment15164827
https://www.lfgss.com/conversations/321918/?offset=100#comment15135496