

I just had a dig around, the back-end is implemented in Rust.
Often the question marked as a duplicate isn’t actually a duplicate; the person marking it just didn’t spend the time to properly understand the question and realise how it differs. I also see lots of answers that misunderstand the question, or that try to force the person asking towards the answerer’s own particular preference, and they get tons of votes whilst doing it.
Don’t get me wrong, some questions are definitely useful - and some go above-and-beyond - but on average the quality isn’t great these days and hasn’t been for a while.
Google’s first-quarter 2023 report shows they made massive profits on vast revenue, the bulk of it from advertising.
It is about control though. The thing that caught my eye is that they’re saying only “approved” browsers will be able to access these WEI sites. So what does that mean for crawlers/scrapers? It means the big tech companies on the approval board will be able to lock potential competitors out of accessing the web - new browsers, search engines, etc. - but, much more importantly, Machine Learning.
Google’s biggest fear right now is that ML systems will completely eliminate most people’s reason to use Google’s search, and therefore their main source of revenue will plummet. And they’re right to be scared, it’s already starting to happen and it’s showing us very quickly just how bad Google’s search results are.
So this seems to me like an attempt to control things from that side. It’s essentially the “big boys” trying to consolidate and firm up their hold on the industry and keep newcomers from rivalling them, because with ML the barrier to entry has never been lower.
How do Linux distros deal with this? I feel like however that’s done, I’d like node packages to work in a similar way - “package distros”. You could have rolling-release, long-term support w/security patches, an application and verification process for being included in a distro, etc.
It wouldn’t eliminate all problems, of course, but could help with several methods of attack, and also help focus communities and reduce duplication of effort.
With regards to education, one of the things I’ve come to understand goes entirely counter to the way I was taught at University - for me, programming is a creative activity. It’s an iterative process, and the fewer constraints I have on how I achieve something (not what I achieve), the more I enjoy it, the more productive I am, and the better, by many measures, the end solution will be.
I think that is a key part of what’s missing from CS education: understanding that and leaning into it, both to increase engagement and to get people thinking outside the box for solutions to their problems. Students seem to be taught so much, but very little about “Here’s a high-level problem, provide a solution” - which is the “core loop” of software development (outside of being a code monkey implementing other people’s designs). You go over requirements and specifications, but you don’t actually DO it… you don’t speak to people, ask the questions, realise they don’t know much about software, then later go “Oh shit, I made this assumption and built the wrong thing!”
One of the things that I used to like more than anything was achieving things even though there were constraints. For example, back in the 90’s, before AJAX was even a thing, I created a site for a betting company that was a SPA and pulled in data and live betting odds. I did this by having a message queue in JavaScript, a hidden frame from which to send messages from the queue to the server using a form, and then the server returned JavaScript code which executed, put the data where needed, and updated the page. I absolutely loved that project, and most people on the team just couldn’t believe it was even possible.
But I didn’t solve it through engineering, I solved it through playing - trying things, seeing what would work/what didn’t, adapting the idea, etc. until I found something that worked - and it was based on some of the things I’d been messing about with in my own time (somewhat bizarrely, creating a sort of online aquarium of Dr. Seuss fish where each one was a person viewing the site!)
I think if we can inject more of that creativity, tinkering, iteration, and playfulness into our education it’ll make a huge difference.
I left University in the late 90’s and got my first job based on the things I’d been messing about with in my spare time with the University’s facilities/at home (Unix, Internet protocols, client/server arch, distributed computing, etc.) rather than anything I’d been taught. I learnt more in my first 3 months in work than in 3 years of education.
Then the dot-com boom hit, and the number of applicants for any position surged - everyone was going into software development for the money. The whole team became involved in selecting candidates and being part of the interviewing process - it was a nightmare trying to give every person a fair chance. We had some good hires and some bad hires, but the bad hires became such a problem because we had to go through the recruitment mill again.
But we realised that the number one factor for whether they’d be a good hire or not was not education, but their own personal projects. That’s what mattered. Doing this for fun was the key indicator of being good, and became the ONLY thing we looked for on CVs in the first pass. Doesn’t matter if you have a 1st from Cambridge, if you don’t demonstrate you have a passion for the subject, you don’t get an interview. It was a huge success, and we built an amazing team and saved ourselves a ton of time during recruitment.
Those people still exist though, I see it all the time! But I think now that the “industry” has grown so much, in any given field there are relatively fewer people being attracted to it. For example, back in the 80’s I was drawn to the personal computer, and in the 90’s to the internet - but those things are staples of everyday life now. Yet I can see young people today being attracted to things like AI, drones, quantum computing, 3D printing, and so on as well.
This is a truly excellent pair of articles, brilliantly written.
It explains the problem, shows the solution iterating step by step so we start to build an intuition about it, and goes as far as most people actually need for their applications.
There’s more! Well, it’s more a bash thing than a cd thing… in bash the variable $_ refers to the last argument to the previous command. So you can do the following:
> mkdir -p my/nested/dir
> cd $_
> pwd
/home/user/my/nested/dir
It’s handy for a whole host of things, like piping/touching then opening a file, chown then chmod, etc.
It complements the cd - command, which switches back to the previous directory you were in. Very handy!
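A couple of the uses I mentioned, as quick examples (the file names and editor here are just placeholders):
> touch notes.md
> nano $_
> chown www-data:www-data config.php
> chmod 640 $_
In each case $_ expands to the last argument of the line before, so you never have to type the path twice.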
I’m new to it too, I’ve known about its existence but have been thinking about adding support for it to a project I’m starting soon - really to learn more about it (I tend to learn best by doing!)
Its goal is for each of us to have personal ownership of all our data online, and full control over who can access what. That’s certainly something I can get behind! You do this by creating a “pod”, which is essentially a database of all your data (I think organised into groups, e.g. each organisation can have their own group of data), which you can self-host if you like, along with the ability to control access.
Its current impact I would say is near zero. But TBL is a person with a reasonable amount of pull, and he’s set up his own company providing commercial services (presumably consulting). My guess is they’re dealing with governments and mega-corps - there seems to be very little effort pushing it to “the masses” (i.e. application developers).
The theory sounds interesting but the practicalities of it seem to offer a lot of challenges, so I think the best way to get a real sense of whether it has legs or not is to build something!
He’s pushing for a decentralised web, specifically focussed on personally owned data through his Solid project. But it feels like maybe this month or so could be a tipping-point, so it would be great to get his input and/or for him to see how we all work away at it!
Tim Berners-Lee would be interesting I think, given the direction he’s gone in with personal ownership/control of data.
That makes sense, thank you. Yes, it’s specifically “test quality” I’m looking to measure, as 100% coverage is effectively meaningless if the tests are poor.
I use coverage tools like nyc/c8, but I can easily get 100% coverage on buggy, exploitable, and unstable code. You can have two projects, both with 100% coverage, and one be a shit show and the other be rock solid - so I was wondering if there’s a way to measure quality of tests, or to identify code that really needs extra attention (despite being 100%). Mutation testing has been suggested and that’s really interesting, I’m going to give it a go tomorrow and see what it throws up!
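For anyone else curious: in the Node world the usual mutation testing tool seems to be StrykerJS. I haven’t actually run it yet, so treat this as a rough sketch rather than a recipe:
> npm install --save-dev @stryker-mutator/core
> npx stryker init   # generates a Stryker config interactively
> npx stryker run    # mutates the source, re-runs the tests, reports surviving mutants
The mutation score and the list of surviving mutants should point at exactly the kind of weak-but-covered code I’m worried about.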
Is there a way to block a whole domain on Lemmy? I’ve blocked the user, but it’s interesting that the whole domain is the same crappy generated stuff. It’s so bad it’s bordering on being a hilarious parody of LLMs, but doesn’t quite make it and so should be scrubbed from the Internet.
This is really interesting, I’ve never heard of such an approach before; clearly I need to spend more time reading up on testing methodologies. Thank you!
Yep… I can get you 100% code coverage of a bug-laden, exploit-ridden piece of software effortlessly. It’s a useless measure.
This is surely AI generated, but even so it’s still awful and a decade or more behind the curve of what I’d expect from AI blog spam!
This is on my list to do - if you find a good solution do let us know!
I was thinking of just doing the quick-and-dirty approach of appending the data to a file in the repo and auto-committing it. Just have some previous commit information, test name, and results appended every time. That way the head always has the full history of data in order so you can just push/pull that into anything and analyse/graph it without messing about.
I’d probably only do it on push/PR merge, so in the grand scheme of things it would never really become a lot of data, but you could truncate it as you go easily enough.
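Roughly what I had in mind for the CI step (the file name and the test variables are just placeholders):
> echo "$(git rev-parse --short HEAD) $(date -u +%F) $TEST_NAME $RESULT" >> test-history.log
> git add test-history.log
> git commit -m "chore: append test results [skip ci]"
> git push
The [skip ci] marker is just there to stop the commit re-triggering the pipeline, on CI systems that honour it.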
I switched to Traefik as it auto-configures itself from your containers, making deployment to any of your environments (dev, test, staging, production, etc.) effortless, either manually or straight from CI/CD.
The way it works is that you put the configuration in your compose file, which is then picked up by Traefik when it’s deployed - it reads the config, re-configures itself accordingly, and you’re done! So all your reverse-proxy config, cert config, etc. lives with the project and isn’t going to get out of sync.
Just keeps things really clean and simple. Plus it’s a great reverse proxy of course with tons of features, nice admin dashboard, logging, etc.
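For anyone who hasn’t tried it, the compose side looks roughly like this - the service name, hostname, and cert resolver name are just examples, from memory of the v2 label syntax:

services:
  myapp:
    image: myapp:latest
    labels:
      # tell Traefik to expose this container
      - "traefik.enable=true"
      # route requests for this hostname to the container
      - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
      # use whatever cert resolver you defined in Traefik's own config
      - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"

Traefik watches Docker, spots the labels when the stack comes up, and reconfigures its routing on the fly - no restart, and no separate config file to keep in sync.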