@jean-naymar602

13:20 Is this overhead inherently caused by the syscall approach?
I mean, if you go the "shared memory" route, you'd have to essentially re-implement the synchronizing behaviour of the kernel's messaging implementation anyway.
Would there still be a significant increase in performance even after doing that ?

@MarkDavid712

12:25 - You have no idea how much I needed this.

I mean, I do understand—the server and client are essentially processes on their respective machines. I've even written simple programs and worked with interconnected systems, so you'd think I would have an intuitive grasp of it.

But the way it’s often represented, with the client and server depicted as entire machines, really stuck with me, and I couldn’t separate them in my mind. This explanation feels like a breakthrough. I think it’s going to help me develop a much stronger intuition about how OSes and IPC work. Thank you.

@joaopedrorocha5693

Thanks! Was stuck trying to understand these concepts and how they play with each other!

Your videos are great at mixing the concepts with just enough implementation detail that we can grasp them and see how they're made concrete.

@SusilRamarao

Thanks mate for the clear explanation!

@SphereofTime

2:34 IPC: shared memory or message passing

10:28 The OS returns it as a return value of a function

13:26 The port resides in the kernel’s address space

@yanglijian

When Core Dumped posts a new video, I know my brain will start to grow again.

@gurupartapkhalsa6565

Shared memory is not necessarily faster, because it depends on mutex locking; the throughput of the memory space is a big factor if you're actually searching for a bottleneck and not just being pedantic. Shared memory is a traditional approach for communicating with a trusted OS, because most trusted OSes are single-threaded by design, with access controls per function/object that are routed through a trusted kernel, and the trusted kernel handles most of the memory-mapping and cleanup semantics. As a simple rule of thumb: the more asynchronous the two processes are, the less they should use shared memory.

@RadheShyam33455

The god of concepts, this guy ❤❤❤ I will sell my kidney to learn from you

@scottydoo2

I know the OSI model by heart, etc. but I have never understood ports and sockets. I’ve memorized port numbers, I forward ports to deal with NAT, I make sure ports are secure, but I’ve never understood what they are. 
Thanks for explaining them. They’re no longer only stupid numbers to me.

@nullpointer1284

Would love a Mach version of this! I know the Unix process model is so ubiquitous, but Mach's concept of Task, Thread, and Virtual Memory that processes can be composed of is super cool IMO.

@esra_erimez

I'm so glad I discovered your channel, it is a great refresher for the stuff I learned in university. (By the way, I love JetBrains tooling)

@CheeseBananas

OS thread scheduling plays a role in IPC. An advantage of messaging over shared memory is that the OS thread scheduler knows it needs to schedule the message-receiving thread next.

@flaviohenrique5294

Just commented so more people can see this video. It's the best explanation ever.

@dixztube

Man, the way you explain these things is so incredibly good. It makes it all so clear.

@bouzaidaaymen

GREAT! Hands up if you finally understood where the port numbers' logic comes from

@alexanderbikk8055

Perfect as always :)
I would ask for a separate video for Sockets

@Drudge.Miller

hi jorge, my name is drudge, and this, is core content

@Cadet1249

Literally today in class we started talking about socket programming and IPC lol

@Abe41194

Dude. Your content is unmatched.

@mr_rabbit-g2q

Loved the video! I'm studying operating systems in my college semester right now! Watching your videos has been really great!