• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 27th, 2023

  • I think the person who thought I was an AI explained it quite well. They said they had just become jaded. However, they believed me when I told them who I was, and they apologised. I appreciate it when people are able to revise their ideas, and it shows they did not have bad intentions.

    I would not say people are crazy; there is a lot of manipulation going on on the internet by businesses and some governments. I think a lot of people fall for bots all the time. For example, Twitter and Reddit are full of them. So, I do not think it is that weird that people are sometimes not sure whether they are talking to an AI.

    What happened to you, where you even showed pictures of yourself and they were still convinced you were an AI, is quite extreme. I hope that does not happen too often, because it suggests the other person is either a troll or paranoid.


  • Thank you, I appreciate that very much. I try to be accepting of other ideas and to be understanding, but sometimes it is difficult for me too, especially when I get many negative reactions and do not completely understand why (I do not mean you, but some of the other people who responded to me). Then I get defensive as well, even though I try not to be.

    Your work sounds nice and very useful! As a researcher, I know a lot about a very small set of subjects. Sometimes I wonder whether I am actually contributing enough and whether what I am doing is actually useful. When you are building homes, at least it is very clear who you are helping and how they benefit from it. I would not be able to do it. I have two left hands, as we say in my language. I am not good with practical things, only with theory.

    In any case, thank you for the discussion. I have already looked into the gut microbiome a little, and there is a lot of scientific work on it. Very complex and very interesting! I am looking forward to delving into that. I hope you have a nice day (or evening, depending on the time where you are).


  • I am not an AI. I am not sure how to prove that, but I am not. I am a scientific researcher, though in a field other than medicine. Maybe my scientific background shows in the way I communicate? Also, English is not my native language, so that might be why I sound different as well.

    The reason I have read so much research on obesity (as well as on being underweight) is that many of my family members suffer from eating disorders. I lost my little sister to anorexia a couple of years ago, and my mother had it too. However, some of my family members are obese, also due to eating disorders. I think trying to understand why people eat in a certain way, and helping them instead of just judging them, might change things. And for me, scientific work and data are the best way to understand things. Maybe that gives you a bit of understanding of where I am coming from and why I am interested in this subject.

    If something is the result of research, it cannot simply be called bullshit and set aside. It is not just another opinion that you can decide to disagree with, given the care that has usually been taken to reduce bias and ensure validity. Of course, research can be wrong, and it is important to have scientific debate. However, such a debate should be based on clear reasoning, arguments, and other research results.

    I was not pitying you. I was being compassionate. There is a difference between the two. I tried to be kind and understanding. That’s all.

    Edit: I also wanted to mention that the study I linked is about women who were pregnant during the WWII famine in the Netherlands. Maybe that is what you meant.


  • Thanks for the name. I will check out Rhonda Patrick and see what research I can find on the topic. I thought you were calling the different theories bullshit, but maybe I misunderstood you and you only meant to say that they sound like that. If that is the case, I apologize. I got so much negativity just for mentioning the research that I might have responded too harshly.

    I am sorry to hear that you are struggling with your weight so much. I think obesity has to do with eating habits. However, there is a reason why you have these eating habits. One reason could be the gut microbiome.

    What often happens is that people just get angry with themselves for eating too much. That anger might help in the short term to force yourself to eat less, but in the long term it will not work, and it will just make you feel bad about yourself. However, if you look at the actual underlying causes, such as the gut microbiome or the setpoint, this might provide the insight needed for long-term weight loss without the extent of suffering that most obese people have to endure.

    It is the only study I know of on this. I looked into it because I have several people with anorexia in my family, as well as some with eating disorders causing obesity. I thought that maybe being anorexic and pregnant is similar, for your body, to being pregnant during a famine. So, that is why I know about this study.


  • That something sounds like bullshit does not mean that it is bullshit. I mean, I believe we should look at the data and the research. I did hear something about the role of gut bacteria, but it was more about issues like depression. Might be interesting to check out further. Thank you.

    I am not saying people should not fight their cravings. But the cravings of someone who is obese might be very different from those of someone who has a normal weight. Like I said, if you get below the setpoint, appetite will often go up. Considering that most obese people are not able to lose significant weight in the long term, these cravings seem to be too strong, and they seem to make people unable to “just eat less”. So, we need a solution for that.

    I am not sure whether this is what you are referring to, but I know about this study that says that prenatal exposure to famine in early gestation increases the risk of obesity.



  • No, being obese is not healthy. It is clearly associated with many health risks. I have no idea why you would infer from what I have said that I think it is healthy. Obesity is clearly a problem. However, to solve it, I think we should look at the mechanisms behind it and try to understand it. So, that is what I am trying to do.

    Saying that something is “just fat people bullshit” is also not a good argument. Maybe we can leave the emotions and especially the anger out of it and just look at the research. You seem angry and I have no idea what I have done to you to make you angry. I just tried to discuss some research on this subject.


  • No, it is not junk science. Research about it is published in many serious scientific journals. Just check out Scopus or something. You cannot say that it is junk science just because you do not like the results.

    You also seem not to understand it. It does not say that you can escape the laws of physics, and neither does my explanation of it. It says that your energy expenditure goes down if you get below the setpoint, so eating less becomes less effective. At the same time, your appetite will go up. This makes it very difficult to maintain the weight loss, and it is why many people fail to keep the weight off in the long term.

    Criticism of any research is possible, of course. However, just saying it is junk and misrepresenting what the theory actually says are not good arguments.

    If you disagree, then what is your explanation for why most obese people tend not to keep more than 10% of their weight off over time without medication or surgery? What scientific evidence is there for that? I would be very interested in hearing about alternative research on this topic.


  • When you are overweight, it is not a case of just eating less. Eating less has very different physical and psychological effects for someone who is overweight than for someone who is not.

    If you are interested in learning something about this, you can check out the setpoint theory of body weight. In short, the body has a setpoint for the weight it should be. If you are overweight, this setpoint is at a higher weight than if you are not. If your weight gets below the setpoint, your metabolism will slow down, your appetite will go up, and the body starts trying to do everything it can to get back to this higher weight. That is why most people are not able to lose more than 10% of their weight in the long term. Often, when they gain the weight back, they gain back even more than they lost, and the setpoint might move up even further. It is a never-ending struggle for most people. Medications like Ozempic affect this mechanism, so it becomes possible to lose weight.
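
    To make the feedback loop concrete, here is a toy numerical sketch in Python (my own illustration, not a physiological model; every number in it is made up for the example). Intake and expenditure both react to how far weight has fallen below the setpoint, so the loss stalls long before the planned calorie deficit alone would predict.

        def simulate_diet(weight, setpoint, days=365, diet_intake=1800.0):
            """Toy model: the body resists dropping below its setpoint."""
            for _ in range(days):
                gap = max(setpoint - weight, 0.0)          # kg below the setpoint
                expenditure = 2500.0 - 150.0 * gap         # metabolism slows down
                intake = diet_intake + 100.0 * gap         # appetite (and eating) rise
                weight += (intake - expenditure) / 7700.0  # ~7700 kcal per kg of fat
            return weight

        # A planned 700 kcal/day deficit levels off after only a few kg:
        print(round(simulate_diet(weight=100.0, setpoint=100.0), 1))  # ~97.2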

    If you want, you can find a lot of scientific papers about this. There is quite a lot of research on it, and I believe the setpoint theory is well accepted within the part of the medical field that specialises in weight problems.

    In addition, Ozempic is not only a weight-loss medicine. It is also used by people with diabetes to lower their blood glucose.



  • I actually agree with this. This technology should be open. I know that there are arguments to keep it closed, like the risk of misuse, etc. However, I think that all the scary stories about AI are also a way to keep attention away from the fact that if you have a monopoly on it, you have enormous power. This power will grow as the tech is used more and more. If all this power is in the hands of a commercial business (even if they say they are not one), then you know AI is going to be misused to make money. We have no clear insight into what they are doing, and we have no reason to trust them.

    You also know that bad actors, like dictatorial governments, will eventually obtain or develop the technology themselves. So, keeping it closed is not a good way to prevent that from happening. At the same time, you are also keeping it from researchers who could investigate how to use and develop it further, responsibly and to the benefit of humanity.

    Also, they relied on data generated by people in society who never received any payment for it. So, it is immoral not to share the results openly with those same people, and to keep them closed instead. I know they used some of my papers. However, I am not allowed to study their model. That seems unfair.

    The dangers of AI should be kept at bay through regulation and enforcement by democratically elected governments, not by commercial businesses or other non-democratic organisations.


  • I agree we need a definition. But there has always been disagreement about which definition should be used (as is the case with almost anything in most fields of science). Traditionally, there have been four types of definitions of (artificial) intelligence; if I remember correctly, they are: thinking like a human, thinking rationally, behaving like a human, and behaving rationally. I remember having to write an essay about it for my studies and ending it by saying that we should not aim to create AI that thinks like a human, because there are more fun ways to create new humans. ;-)

    I think the new LLMs will pass most forms of the Turing test and are thus able to behave like a human. According to Turing, we should therefore assume that they are conscious, as we make the same assumption for humans based on their behaviour. And I think he has a point from a rational point of view, although it seems very counterintuitive to give ChatGPT rights.

    I think the definitions in the category of behaving rationally always had the largest following, as they allow for rationality that is different from a human’s. And then, of course, rationality itself is often ill-defined. I am not sure whether the goalposts have been moved, as this was the dominant idea for a long time.

    There used to be a lot of discussion about whether we should focus on developing weak AI (narrow, performance on a single or few tasks) or strong AI (broad, performance on a wide range of tasks). I think right now, the focus is mainly on strong AI and it has been renamed to Artificial General Intelligence.

    Scientists, and everyone else, have always been bad at predicting the future. In addition, disagreement about what will be possible, and when, has always been at the center of discussions in the field. However, if you look at the dominant ideas of what AI can do and in what time frame, it is not always the case that researchers underestimate developments. I started studying AI in 2006 (I feel really old now) and, based on my experience, I agree with you that the technological developments are often underestimated. However, the impact of AI on society seems to be continuously overestimated.

    I remember that at the beginning of my studies there was a lot of talk about automated reasoning systems being able to do diagnosis better than doctors and therefore replacing them. Doctors would have only a very minor role, since a human would need to take responsibility, but that was it. When I go to my doctor, that still has not happened. This is just one example, but the benefits and dangers of AI have been discussed since the beginning of the field, and what you see in practice is that the role of AI has grown, yet is still much, much smaller than was predicted.

    I think the liquid neural networks are very neat and useful. However, they are still neural networks. It is still an adaptation of the same technology, with the same issues. I mean, you can throw an image recognition system off the rails just by showing it an image with a few specific pixels changed. The issue is that it is purely pattern-based. These systems lack the basic understanding of concepts that humans have. This type of understanding is closer to what was developed in the field of symbolic AI, which has rather fallen out of fashion. However, if we could combine them, I believe we could really make some new advances: not just adaptations of what we already have, but a new type of system that can go beyond what LLMs do right now. Attempts to do so have been made, but they have not really been successful. If this happens and the results are as big as I expect, maybe I will start to worry.
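
    To show how little it takes to derail a pattern-based classifier, here is a minimal sketch of the classic fast gradient sign method in Python/PyTorch. The model, image, and label are placeholders for any differentiable classifier, and the epsilon value is arbitrary; this is an illustration of the idea, not a hardened attack implementation.

        import torch
        import torch.nn.functional as F

        def fgsm_perturb(model, image, label, epsilon=0.01):
            """Nudge every pixel slightly in the direction that raises the loss."""
            image = image.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(image), label)  # how wrong the model is now
            loss.backward()                              # gradient of loss w.r.t. pixels
            perturbed = image + epsilon * image.grad.sign()
            return perturbed.clamp(0.0, 1.0).detach()    # keep valid pixel values

    A perturbation this small is usually invisible to a human, yet it can flip the predicted class, because the model matches patterns rather than understanding concepts.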

    As for the rights of AI, I believe that researchers and other developers of AI should be very vocal about this, to make sure the public understands this. This might put pressure on the people in power. It might help if people experience behaviour of AI that suggests consciousness, or even if we let AI speak for itself.

    We should not just try to control the AI. I mean, if you have a child, you do not teach it how to become a good human by controlling it all the time. It will not learn to control itself, and it will likely follow your example of being controlling. We need to be kind to it, to teach it kindness. We need to be the same towards AI, I believe. And just as a child without emotions might behave like a psychopath, an AI without emotions might as well. So we need to find a way to give it emotions too. There has been some work on that as well, but it is very limited.

    I think the focus is still too exclusively on ML for AGI to be created.


  • Why do you think it will be within 5 years? I mean, we just had a growth spurt in AI due to the creation of LLMs with far more data and parameters. They are impressive, but the algorithms behind them are still quite close to the ML algorithms that were created in the 60s. They have been optimised, and we now have deep learning, but there has not been a major change or advancement in the technology. For example, ChatGPT seems very smart, but it is just a very fancy parrot, not close to general intelligence.
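
    To illustrate what I mean by “fancy parrot”: stripped of all the engineering, an LLM just repeatedly picks a statistically likely next token and appends it to the text. A rough Python sketch, with model and tokenizer as hypothetical stand-ins for any autoregressive language model:

        import torch

        def generate(model, tokenizer, prompt, max_new_tokens=20):
            ids = tokenizer.encode(prompt)             # text -> list of token ids
            for _ in range(max_new_tokens):
                logits = model(torch.tensor([ids]))    # a score for every possible next token
                next_id = int(logits[0, -1].argmax())  # take the statistically likeliest
                ids.append(next_id)                    # ...and condition on it next round
            return tokenizer.decode(ids)               # token ids -> text

    Everything impressive sits inside the learned scoring function; the loop itself contains no reasoning, goals, or understanding.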

    I think the next step will be combining ML and symbolic AI. Both have their own strengths, and being able to combine them effectively might lead to a higher level of intelligence. There could also be a role for emotions in certain types of intelligence, and I do not think we really know how to integrate those yet either.
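
    As a toy sketch of the kind of combination I mean (entirely made up for illustration, not an established system): a learned model supplies uncertain perceptual facts, and a hand-written symbolic layer draws conclusions from them that the network itself never learned.

        def neural_perception(image):
            # Stand-in for any trained classifier: (symbol, confidence) pairs.
            return [("cat", 0.92), ("indoors", 0.81)]

        RULES = [
            # Symbolic knowledge: a set of premises implies a conclusion.
            ({"cat", "indoors"}, "pet"),
            ({"cat", "outdoors"}, "stray_or_pet"),
        ]

        def reason(facts, threshold=0.5):
            symbols = {s for s, p in facts if p >= threshold}  # keep confident facts
            return [conclusion for premises, conclusion in RULES
                    if premises <= symbols]                    # fire matching rules

        print(reason(neural_perception(None)))  # -> ['pet']

    The hard research question, of course, is learning the rules and the symbols jointly instead of writing them by hand.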

    I do not think we can do this in 5 years. That will take decades, at least. And once we can, we have a new problem, because the AI might have consciousness. If we cannot be sure and it seems conscious, then we should give it rights, as we should for any conscious being. Right now, everyone is focussing on controlling the AI. However, if it is conscious, that is immoral. You are creating new slaves. In that case, we should either not make it, or integrate it into society in a way that respects human rights as well as the rights of the AI.




  • I think the threat of AGI is much, much lower than that of climate change. It is still debated among scholars whether AGI will actually happen (in the near future) and, if it does, whether it will actually be a threat to humanity. On the other hand, we are sure that climate change is a threat to humanity, and it is already happening.

    I think the main issue with AI in the short term is that humanity will not benefit from it, only large businesses and the already wealthy. At the same time, people are being manipulated at a large scale by these same algorithms (e.g., on social media) to make money for these large businesses or to create societal discord for parties benefitting from that.

    I think instilling fear of AGI in the public distracts from that, and it reduces the chances that this technology will be available to the larger public, as these fears might lead to strict regulations and to only a few powerful parties having access to it.

    So, don’t fear AGI. Fear climate change. Also, be very critical of who has the power over current AI systems and how they are being used.