I think using bucket sort we can solve this problem in O(n) time, where we take the frequency of occurrences as the index and append the corresponding numbers at that index.
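A minimal Python sketch of that bucket idea (the function and variable names are my own, not from the video):

```python
from collections import Counter

def top_k_frequent_buckets(nums, k):
    counts = Counter(nums)  # number -> how many times it appears, O(n)

    # Bucket index = frequency, bucket contents = numbers with that frequency.
    # A number can appear at most len(nums) times, so len(nums) + 1 buckets suffice.
    buckets = [[] for _ in range(len(nums) + 1)]
    for num, freq in counts.items():
        buckets[freq].append(num)

    # Walk the buckets from the highest frequency down and collect k numbers, O(n).
    result = []
    for freq in range(len(buckets) - 1, 0, -1):
        for num in buckets[freq]:
            result.append(num)
            if len(result) == k:
                return result
    return result
```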
A min heap (i.e. a priority queue) also works by keeping elements ordered, so if you push all n elements onto it the time complexity is still going to be O(n log n).
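For reference, a rough sketch of that push-everything approach (names are my own):

```python
import heapq
from collections import Counter

def top_k_frequent_heap(nums, k):
    counts = Counter(nums)

    # Push every (negated frequency, number) pair: up to n pushes at O(log n) each.
    heap = []
    for num, freq in counts.items():
        heapq.heappush(heap, (-freq, num))

    # Pop the k most frequent numbers.
    return [heapq.heappop(heap)[1] for _ in range(k)]
```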
Got it. Heaps are the solution to everything
We can also use quickselect to solve this problem. In the worst case the time complexity is O(n^2), with a best case of O(n) when roughly half the elements land on each side of the pivot at every step. By choosing a random pivot at each step, the algorithm works well in practice for these kinds of problems (finding the kth element or the top k frequent elements).
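A sketch of quickselect on the frequency map, assuming k is at most the number of distinct values (all names are my own):

```python
import random
from collections import Counter

def top_k_frequent_quickselect(nums, k):
    counts = Counter(nums)
    unique = list(counts.keys())

    # Partition unique[left..right] so that numbers more frequent than the pivot
    # end up to the left of the pivot's final position (descending by frequency).
    def partition(left, right):
        pivot_idx = random.randint(left, right)  # random pivot to avoid O(n^2) on bad inputs
        pivot_freq = counts[unique[pivot_idx]]
        unique[pivot_idx], unique[right] = unique[right], unique[pivot_idx]
        store = left
        for i in range(left, right):
            if counts[unique[i]] > pivot_freq:
                unique[store], unique[i] = unique[i], unique[store]
                store += 1
        unique[store], unique[right] = unique[right], unique[store]
        return store

    # Narrow the range until index k - 1 holds its final value; then the first
    # k slots are the k most frequent numbers (in no particular order).
    left, right = 0, len(unique) - 1
    while left <= right:
        mid = partition(left, right)
        if mid == k - 1:
            break
        if mid < k - 1:
            left = mid + 1
        else:
            right = mid - 1
    return unique[:k]
```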
Using bucket sort is also a solution: use bucket sort, but take the numbers themselves as the key and their occurrences as the value.
This heap solution's time and space complexity are O(n log k) and O(k). The optimal solution in terms of time complexity is the bucket sort solution, whose time and space complexity are both O(n).
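A minimal sketch of the size-k heap variant that gives the O(n log k) time / O(k) heap space mentioned here (the counting map itself still takes O(n); names are my own):

```python
import heapq
from collections import Counter

def top_k_frequent_bounded_heap(nums, k):
    counts = Counter(nums)

    heap = []  # min-heap of (frequency, number), never holding more than k entries
    for num, freq in counts.items():
        heapq.heappush(heap, (freq, num))  # O(log k)
        if len(heap) > k:
            heapq.heappop(heap)            # evict the least frequent of the k + 1
    return [num for _, num in heap]
```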
Oh my god, I got this question as part of my Meta interview last year and I used a min heap, yet the interviewer seemed confused about why I used it. Didn’t make it to the next round :(
Currently studying for a Meta interview. They told me it was going to be two problems in 45 minutes total. There’s NO chance I could do two problems like this in 45 minutes. I can solve them, sure, but not multiple problems of this difficulty in so little time. Interviews at Google and Amazon are a single problem in about 45 minutes. I’ve passed the Google tech screen and the 6-hour Google onsite. I don’t think I’ve worked with a heap in about 5 years, maybe 6. Most companies don’t use them frequently. I’m certain I’ve only ever touched a heap while preparing for interviews (doing an implementation and solving problems). The preparation material seems WAY easier than Google’s or Amazon’s. But Meta also doesn’t include heaps in the prep material. I see: arrays, strings, hash tables, search, sorting, recursion, stacks. From what I recall, Google and Amazon both had the same list of 28 separate things to be prepared to be tested on.
Hash map -> stack of at most k elements, with the min-frequency element at the top
Idea 1: sort the array, O(n log n). Idea 2: hashmap + keep the top k in something like a sorted array of k elements, that's O(n*k). Idea 3: hashmap + heap; the heap can maintain the top k at O(log k) per operation, so it's O(n log k).
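Idea 2 above, roughly, as a Python sketch (keeping a sorted list of at most k (frequency, number) pairs; names are my own):

```python
import bisect
from collections import Counter

def top_k_frequent_sorted_list(nums, k):
    counts = Counter(nums)  # hashmap step, O(n)

    best = []  # ascending list of (frequency, number), at most k entries
    for num, freq in counts.items():
        bisect.insort(best, (freq, num))  # O(k) shift cost per insertion
        if len(best) > k:
            best.pop(0)                   # drop the least frequent, also O(k)
    return [num for _, num in best]
```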
Since the problem allows returning the top k frequent elements in any order, a better solution would be to use quickselect after the hashmap step; then the total complexity stays linear on average. Alternatively, you can build a second hashmap from frequency to the list of numbers with that frequency and probe it with frequencies in the range n..1, which also gives you an O(n) solution for the second part.
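The second idea might look something like this (essentially the bucket approach expressed with a dictionary; names are my own):

```python
from collections import Counter, defaultdict

def top_k_frequent_freq_map(nums, k):
    counts = Counter(nums)            # number -> frequency, O(n)

    freq_to_nums = defaultdict(list)  # frequency -> numbers with that frequency
    for num, freq in counts.items():
        freq_to_nums[freq].append(num)

    # Probe frequencies from n down to 1; total work stays O(n).
    result = []
    for freq in range(len(nums), 0, -1):
        for num in freq_to_nums.get(freq, []):
            result.append(num)
            if len(result) == k:
                return result
    return result
```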
Your calculation is wrong. Min/max heap insertion has an average time complexity of O(log n); doing it for the worst case (all unique numbers) gives O(n log n), which might be a little faster than sorting. In addition, if k = n (worst case), it takes O(k log n) to pop the elements. So you get O(n (log n + log k)) instead of O(n log n). What am I missing?
The quickselect algorithm, O(n) on average, will also work
In C++, a map is already sorted; you can change the compare function to greater-than and just do the 2nd portion.
The optimal solution would be to use bucket sort
❤ unbelievable 😮
Also, you can solve this in O(n) time with a bucket sort; it just costs extra memory. Tradeoffs.
Why not just iterate the hashmap and store the top two frequencies as you traverse? O(n) for building the hashmap + O(n) for traversing the hashmap
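Assuming k = 2 as in this comment, a one-pass sketch over the count map might look like this (names are my own):

```python
from collections import Counter

def top_two_frequent(nums):
    counts = Counter(nums)  # O(n) to build

    # Single pass over the map, remembering the best and second-best so far.
    first = second = None
    for num, freq in counts.items():
        if first is None or freq > counts[first]:
            first, second = num, first
        elif second is None or freq > counts[second]:
            second = num
    return [first, second]  # assumes at least two distinct numbers
```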
Why not just use a hashmap? Key is the input array element and frequency is the value. Go through the array and increase the frequency. Then go through the hashmap and print out the key whenever the frequency is 2. It’s linear in time and space.
@GregHogg