  • Random programming certificates are generally worthless. The course to get one might teach you a lot and be worthwhile, but the certificate at the end is worthless. If it is free then it does not matter too much either way; it might be a good way to test yourself. But I would not rely on it to get you a job at all. For that you need other ways to prove you can do the job - typically the ability to talk about the material and having written some real-world-like application. Which a course might help you do, too.




  • Never said it had to be a text file. There are many binary serialization formats that could be used. But in a lot of situations the overhead you save is not worth the debugging effort of working with binary data. For something like this, which is likely not going to be more than a GB or so (probably much less), it really does not matter that much whether you use a binary or a text format. This is an export format that will likely just have one batch-processing layer on top. This type of thing is generally easiest for most people to work with in a plain text format. And if you really need efficient querying of the data, it is trivial and quick to load it into a DB of your choice rather than being stuck with sqlite.
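
    If you do need querying later, here is a minimal sketch of the "load it into a DB of your choice" step, assuming a line-delimited JSON export - the file names and the ts/event fields here are made up for illustration:

        import json
        import sqlite3

        conn = sqlite3.connect("export.db")
        conn.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, event TEXT)")

        with open("export.jsonl", encoding="utf-8") as f:
            # One JSON record per line keeps both the export and this import trivial.
            rows = ((rec["ts"], rec["event"])
                    for rec in (json.loads(line) for line in f if line.strip()))
            conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
        conn.commit()

        print(conn.execute(
            "SELECT event, COUNT(*) FROM events GROUP BY event").fetchall())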


  • export tracking data to analyze later on

    That is essentially log data, or at least something equivalent to it. Log data does not have to be human readable; it is just a series of events that happen over time. Most log data, even what you would think of as traditional messages from a program, is not parsed by humans manually but analyzed by code later on. It is really not that hard or slow to process log data line by line. I have done this with TBs of data before, which does require a lot more effort. A simple file like this would take seconds to process at most, even if you were not very efficient about it. I also never said it needed to be stored as text, just that a simple file is enough - no need for a full database. That file could be binary if you really need it to be, but text serialization would also be good enough. Most of the web world is processed via text serialization.
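
    For example, a minimal line-by-line pass over such a file, assuming one JSON record per line (the format and the "event" field are illustrative, not from OP):

        import json
        from collections import Counter

        counts = Counter()
        with open("tracking.log", encoding="utf-8") as f:
            for line in f:  # streams the file; never holds it all in memory
                if line.strip():
                    counts[json.loads(line)["event"]] += 1
        print(counts.most_common(10))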

    The biggest problem with YAML like in OP is the need to decode the whole file at once, since it is a single list. Line-by-line processing would be a lot easier to work with. But even then, if it is only a few hundred MBs, loading it all into memory once and analyzing it there would not take long at all - it just does not scale very well.
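
    To make the contrast concrete, a sketch assuming PyYAML: a single top-level list forces a full decode, while a stream of ----separated documents (or one record per line) can be consumed incrementally:

        import yaml

        handle = print  # stand-in for whatever analysis you actually do

        # Single top-level list: the whole file is parsed into memory
        # before you can touch the first item.
        with open("export.yaml", encoding="utf-8") as f:
            for item in yaml.safe_load(f):
                handle(item)

        # Stream of ----separated documents: safe_load_all yields them
        # one at a time, so memory use stays flat.
        with open("export_stream.yaml", encoding="utf-8") as f:
            for item in yaml.safe_load_all(f):
                handle(item)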





  • It is not a full replacement - but they are aiming for:

    Our current target is to build a drop-in replacement for all common use cases of sudo.

    They are dropping support for some of the older/niche features and settings, and now ignore some of the configuration you used to be able to set.

    Some parts of the original sudo are explicitly not in scope. Sudo has a large and rich history and some of the features available in the original sudo implementation are largely unused or only available for legacy platforms. In order to determine which features make it we both consider whether the feature is relevant for modern systems, and whether it will receive at very least decent usage. Finally, of course, a feature should not compromise the safety of the whole program.

    But generally, for all common use cases, it should just be a drop-in replacement.