I code as a hobby, and for a living 👨‍💻

Creator of Leomard App 🐱

Join the !leomard@lemm.ee!

  • 1 Post
  • 12 Comments
Joined 1 year ago
Cake day: June 30th, 2023



  • The first method does not store the number itself anywhere. Let’s assume that you store apples. I come and ask you “How many apples do you have?”. To answer, you go and count every single apple one by one and tell me the number. It’s very easy if you have a small number of apples, but if you have, let’s say, 5000 apples - you can see how long that may take.

    The second option is to keep track of how many apples you have in stock by writing the number down somewhere. If I ask you “How many apples do you have?”, you just pull out your notepad and tell me the number. If you give me an apple, you just adjust the number you have already written down - roughly like the sketch below.
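
    In code, the difference between the two methods looks roughly like this (a made-up Rust sketch, not anything Lemmy actually runs - the Warehouse type and all the names are invented for the example):

    // Method 1: count every apple on demand.
    // Method 2: keep a running total (the "notepad") and adjust it on every change.
    struct Warehouse {
        apples: Vec<String>, // every individual apple in stock
        apple_count: usize,  // the number written down in the notepad
    }

    impl Warehouse {
        // Method 1: walk through the whole stock and count, every single time.
        fn count_by_walking(&self) -> usize {
            self.apples.len()
        }

        // Method 2: just read the number that is kept up to date.
        fn count_from_notepad(&self) -> usize {
            self.apple_count
        }

        // With method 2, every change must also update the written-down number.
        fn give_away_apple(&mut self) {
            if self.apples.pop().is_some() {
                self.apple_count -= 1;
            }
        }
    }

    fn main() {
        let mut warehouse = Warehouse {
            apples: vec!["apple".to_string(); 5000],
            apple_count: 5000,
        };
        warehouse.give_away_apple();
        // Both methods agree on 4999, but the second one is much cheaper.
        assert_eq!(warehouse.count_by_walking(), warehouse.count_from_notepad());
    }

    Method 1 gets slower the more apples you have; method 2 stays instant, but only as long as you never forget to update the notepad.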


  • Getting the total number of all comments may be very resource-heavy if there are a lot of comments.

    If it’s just 5 comments, then the computer can quickly get them all from the database and count how many of them there are. Now imagine there are 50 000 comments and suddenly you, me, and the entire website ask “how many comments are there for this post?”

    Suddenly the computer is overwhelmed by the requests and you may end up crashing it due to the amount of work it has to do.

    It’s way faster if, instead of all of that, the computer keeps track of the number of all comments and simply adjusts it when a comment is added or removed. It does not have to get all the comments and count them - it just returns the number and you are done.

    But in essence, you sacrifice potential accuracy for speed. You may accidentally “desynchronize” the counter - for example, if a user requests the removal of the same comment twice and you don’t check whether that comment was already removed. Or, in theory, if two separate users add or remove a comment at the same time. The latter is called a “race condition”, which is common in multi-threaded computing. The sketch below shows how the counter can end up wrong.
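
    Here is a tiny made-up Rust sketch of that desynchronization (not Lemmy’s real code - the Post type and the function are invented for the example):

    use std::collections::HashMap;

    struct Post {
        comment_count: i64,
        comments: HashMap<u64, bool>, // comment id -> "already deleted" flag
    }

    // Naive delete: never checks whether the comment was already removed.
    fn delete_comment_naive(post: &mut Post, id: u64) {
        if let Some(deleted) = post.comments.get_mut(&id) {
            *deleted = true;         // flagging it twice is harmless...
            post.comment_count -= 1; // ...decrementing twice is not
        }
    }

    fn main() {
        let mut post = Post {
            comment_count: 1,
            comments: HashMap::from([(42, false)]),
        };
        delete_comment_naive(&mut post, 42);
        delete_comment_naive(&mut post, 42); // the same request arrives twice
        println!("{}", post.comment_count);  // prints -1 - the counter is now wrong
    }

    Checking the “already deleted” flag before decrementing - or doing the check and the decrement in a single atomic database update - avoids that.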


  • Ok, so basically, there are multiple ways one could count comments. The most obvious option is to count the actual number of comments under the post. In practice this might be slow, as you must load all the comments under the post. An alternative approach is to have a count variable for the post, which is increased or decreased by 1 when a comment is added or removed. It’s way faster to retrieve that variable than to get all the comments and count them. The problem starts if some anomaly happens that is not accounted for - for example, if I request the same comment to be deleted multiple times, the counter can be decreased more than once for the same comment. This could be fixed pretty easily:

    if comment_to_delete is already deleted {
        // The comment is already gone - do not decrement the counter again
        return
    }

    post.comment_count -= 1
    delete_comment(comment_to_delete)
    

    And yeah, I thought so too, but ever since I stumbled upon this bug, I think the comment count is stored as a counter variable like that.



  • As the author of one Lemmy front-end, I can confirm that you are potentially sharing your username and password. Unfortunately, there is no way for Lemmy front-end developers to, say, open a web socket to the Lemmy instance and have you log in through a web browser (which would be much preferred from a security standpoint, but it is what it is).

    Furthermore, from what I see, many of these front-ends store your password instead of just the Bearer token. Unfortunately, from what I gather, there is also no way of invalidating Bearer tokens right now, so in the event of one getting stolen - you’re f***ed. (There is a rough sketch of the token-only approach after the tips below.)

    Now, a couple of tips:

    • USE 2FA. In the event of a malicious app actually stealing your credentials, you are at least a little bit more protected by that extra layer.
    • Use a password manager - and please do not reuse your banking password.
    • Only use trusted front-ends, and in the case of an app, only download versions from official sources maintained by the app author.
    • Make sure the instance you’re registered at has a valid HTTPS certificate.
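
    For what it’s worth, the safer pattern looks roughly like this: exchange the credentials for a token once, keep only the token (ideally in the OS keychain), and throw the password away immediately. This is just a hedged Rust sketch, assuming Lemmy’s /api/v3/user/login endpoint and its jwt response field; the function name and the reqwest/serde_json crates are choices made for the example, not what any particular front-end actually uses:

    use serde_json::json;

    // Log in once and return only the token; the password is never persisted.
    fn login_and_keep_only_the_token(
        instance: &str, // e.g. "https://lemm.ee"
        username: &str,
        password: &str,
        totp_2fa_token: Option<&str>,
    ) -> Result<String, Box<dyn std::error::Error>> {
        let client = reqwest::blocking::Client::new();
        let response: serde_json::Value = client
            .post(format!("{}/api/v3/user/login", instance))
            .json(&json!({
                "username_or_email": username,
                "password": password,
                "totp_2fa_token": totp_2fa_token,
            }))
            .send()?
            .error_for_status()?
            .json()?;

        // Store ONLY this token (e.g. in the keychain), never the password itself.
        let jwt = response["jwt"]
            .as_str()
            .ok_or("login response did not contain a jwt")?
            .to_string();
        Ok(jwt)
    }

    Even then, as said above, a stolen token is still bad news until there is a way to invalidate it - but at least it isn’t your actual password.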