Oh interesting, I will have to look at how Tesseract does it.
What a dumb way to do image proxying.
This is about proxying external images, URL rewrite won’t work unless the image is also downloaded and hosted by the instance (which seems even worse for many reasons).
Or am I missing something here?
Afaik constant use of paracetamol mostly impacts the stomach; it does something to the lining.
Or maybe that was ibuprofen?
Not a medical professional either way, so…
Edit: Root cause aside, as people have already told you here, I would try CBD - it might help and shouldn't strain the body as much.
So, the word here is concurrency. It's not something specific to Python; asyncio in Python is just an implementation of asynchronous execution that allows for concurrency.
Imagine a pizza restaurant that has one cook. This is your typical non-async, non-threading python script - single-threaded.
The cook checks for new orders, picks up the first one and starts making the pizza one instruction at a time - fetching the dough, waiting for the ham slicer to finish slicing, … eventually putting the unbaked pizza into the oven and sitting there waiting for the pizza to bake.
The cook is rather inefficient here; instead of waiting for the ham slicer and the oven to finish their jobs he could be picking up new orders, starting new pizzas and fetching/making other ingredients.
This is where asynchronicity comes in as a solution: the cook is your single thread, and the machines are mechanisms that have to be started but don't have to be waited on - these are usually various sockets and file buffers (notice these are things your OS can handle for you on the side - hence async IO).
So, the cook configures the ham slicer (puts a block of ham in) and starts it - but does not wait for each ham slice to fall out so he can put it on the pizza. Instead he picks up a new order and goes through the motions until the ham slicer is done (or until he needs the slicer to cut a different ingredient; in that case he would have to wait for the ham task to finish first, put the cheese in and switch to finishing the first order with the ham).
With proper asynchronicity your cook can now handle a lot more pizza orders, simply because his time is not spent so much on waiting.
Making a single pizza is not faster, but in total the cook can handle making more of them in the same amount of time - this is the important bit.
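The "more pizzas in the same time" effect can be sketched with `asyncio.sleep` standing in for waiting on the oven/slicer (all the names here are illustrative, not from any real API):

```python
import asyncio
import time

async def make_pizza(order: int) -> str:
    # asyncio.sleep stands in for waiting on the oven/slicer:
    # while this task waits, the event loop works on other orders
    await asyncio.sleep(0.1)
    return f"pizza #{order}"

async def main() -> list[str]:
    # start five orders; the single "cook" (one thread) interleaves them
    return await asyncio.gather(*(make_pizza(i) for i in range(5)))

start = time.perf_counter()
pizzas = asyncio.run(main())
elapsed = time.perf_counter() - start
# the five 0.1 s waits overlap, so the total is close to 0.1 s, not 0.5 s
```

Each pizza still takes 0.1 s on its own; only the waiting overlaps.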
Coming back to why an async REPL is useful: it comes down to how Python implements async - with special ("colored") functions:
```python
async def prepare_and_bake(pizza):
    # await - a context switch can occur and Python will check whether other
    # asynchronous tasks can be continued/finalized; so instead of blocking
    # here, waiting for the oven to be empty, the cook looks for other tasks
    await oven.is_empty()
    await oven.bake(pizza)
    ...
```
The function `prepare_and_bake()` is an asynchronous function (`async def`), which makes it special. I would have to dive into event loops here to fully explain why an async REPL is useful, but in short: you can't call async functions directly to execute them - you have to schedule them.
The async REPL is here to help with that, allowing you to do `await prepare_and_bake()` directly, in the REPL.
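A minimal sketch of what "you have to schedule it" means (the function body is illustrative):

```python
import asyncio

async def prepare_and_bake(pizza: str) -> str:
    await asyncio.sleep(0)  # stand-in for awaiting the oven
    return f"baked {pizza}"

# calling the function does NOT run it - it only creates a coroutine object
coro = prepare_and_bake("margherita")
kind = type(coro).__name__   # "coroutine"
coro.close()                 # discard it to avoid a "never awaited" warning

# in a normal script you schedule it on an event loop:
result = asyncio.run(prepare_and_bake("margherita"))

# in `python -m asyncio` you could instead just type:
#     await prepare_and_bake("margherita")
```

In a plain `python` REPL, `await prepare_and_bake(...)` at the top level is a syntax error; the async REPL runs an event loop for you.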
And to give you an example where async does not help, you can’t speed up cutting up onions with a knife, or grating cheese.
Now, if every ordered pizza required a lot of cheese you might want to employ a secondary cook to preemptively do these tasks (and “buffer” the processed ingredients in a bowl so that your primary cook does not have to always wait for the other cook to start and finish).
This is called parallelism: multiple tasks that require direct work and can't be relegated to a machine (the OS - or, to be precise, can't just be started and awaited upon) are done at the same time.
In a real example, if something requires a lot of computation (calculating something like the nth Fibonacci number, applying a function to a list with a lot of entries, …) you would want to employ secondary threads or processes so that your main thread does not get blocked.
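One common way to do that from async code is `loop.run_in_executor`, which hands the blocking work to a pool so the event loop stays free. A sketch (the cheese-grating function is illustrative; for heavy pure-Python number crunching you would pass a `concurrent.futures.ProcessPoolExecutor` instead of `None`):

```python
import asyncio
import time

def grate_cheese(kg: int) -> int:
    # blocking, hands-on work - awaiting won't help here
    time.sleep(0.1)  # stands in for real computation
    return kg * 1000  # grams grated

async def main() -> list[int]:
    loop = asyncio.get_running_loop()
    # run the blocking work in the default thread pool so the event loop
    # (the "main cook") is free to serve other tasks in the meantime;
    # None means "use the default executor"
    jobs = [loop.run_in_executor(None, grate_cheese, kg) for kg in (1, 2, 3)]
    return await asyncio.gather(*jobs)

grams = asyncio.run(main())
```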
To summarize, async/concurrency helps in cases where you can delegate (IO) processing to the OS (usually reading/writing into/out of a buffer), but it does not make anything faster in itself - it is just more efficient, as you don't have to wait so much, which is often the bottleneck in single-threaded applications.
Hopefully this was a somewhat understandable explanation haha. Here is some recommended reading: https://realpython.com/async-io-python/
Final EDIT: Reading it myself a few times, a pizza bakery example is not optimal; a better example would have been something where one has to talk with other people but those people don't have immediate responses - to better drive home that this is mainly used for input/output tasks.
Not necessarily a trick that’s always useful but I always forget this.
You can get an async REPL by calling `python -m asyncio`.
Also, old trick - in need of simple http server serving static files?
`python -m http.server`
A bit off-topic, but either way: have you ever tried applying NN agents to games with incomplete information, card games with opponents and the like?
Afaik you have to produce the same wave but with opposite phase (the sine flipped up/down) to cancel out the incoming sound, so any ANC earbuds have to have microphones and dynamically shape the sound.
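The cancellation idea can be shown numerically: a wave plus the same wave flipped upside down sums to silence. This is an idealized sketch (real ANC has to adapt in real time from the microphone signal; the sample rate and frequency here are arbitrary):

```python
import math

# sample one period of a 100 Hz tone at 8 kHz (values are illustrative)
rate, freq = 8000, 100
incoming = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate // freq)]

# the "anti-noise" is the same wave phase-inverted (flipped up/down)
anti_noise = [-s for s in incoming]

# superposition: the two waves sum to (near) silence at every sample
residual = [a + b for a, b in zip(incoming, anti_noise)]
```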
Stellaris with a friend, too much democracy (Helldivers).
lemmy.one has disabled downvotes; it's up to the admins of each instance whether they allow viewing and making downvotes.
Maybe this https://github.com/Alinto/sogo
You know what, I am gonna call skill issue.
I get that the "press R to join a group" can be overlooked, or that not everyone has the intuition to click on the active missions on a planet (alright, these are currently bugged and don't refresh quickly enough, so they always show as full).
But all one has to do is a quick Google search to find out you just open the big holo planet and press R; there are also definitely worse offenders in cryptic/useless UIs.
I wouldn’t recommend putting ssh behind any vpn connection unless you have secondary access to the machine (for example a virtual tty/terminal from your provider, or ssh over the local network). At best, ssh should be the only publicly accessible service (unless you host other services that need to be publicly accessible).
I usually move the ssh port to some higher number just to get rid of the basic scanners/skiddies.
Also disable password login (only keys) and no root login.
And for extra hardening, explicitly allow ssh for only users that need it (in sshd config).
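The hardening steps above might look roughly like this in `/etc/ssh/sshd_config` (the port number and usernames are placeholders):

```
# sketch of the hardening described above
# non-default port to dodge the basic scanners/skiddies
Port 22222
# key-based auth only
PasswordAuthentication no
# no direct root login
PermitRootLogin no
# explicitly allow ssh for only the users that need it (names are placeholders)
AllowUsers alice deploy
```

Reload sshd after editing, and keep an existing session open while testing so you don't lock yourself out.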
There are/were two reasons why I did that:
The only workarounds that seem to improve stability involve manually downclocking or undervolting Intel’s processors.
Guess that explains why I haven’t had any unexpected crashes yet with stuff like Palworld or Helldivers 2 (afaik both are made in UE). Have been running my 13900kf slightly undervolted.
Is this post missing something, or am I out of date on what is happening?
E: Found this https://drewdevault.com/2024/04/09/2024-04-09-FDO-conduct-enforcement.html#fn:1
I don’t use nginx proxy manager but websocket has to be enabled for apps that use websockets (duh) - you would have to dive into docs or example infra configs to check if the service uses it.
Rule of thumb here would be to enable it for everything. Optionally you could check if the service works with/without it.
E: Websockets are used when a website needs to talk in “real-time” with the server - live views and graphs will usually use them, as will notifications. Generally, if the website does not fully reload/redraw but the data seems to change, there is a high chance it uses websockets under the hood (but there are ways to do it without WS, e.g. SSE).
Example: Grafana uses websockets, but the qBittorrent web UI uses other means (SSE) and does not require WS.
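For reference, in plain nginx (which Nginx Proxy Manager generates configs for) the "enable websockets" toggle essentially boils down to forwarding the upgrade handshake headers - a sketch, with the upstream name/port as placeholders:

```
location / {
    proxy_pass http://grafana:3000;
    proxy_http_version 1.1;
    # forward the websocket upgrade handshake
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```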
borg backup with rsync.net
Borg does de-duplication and compression. I’ve used it for multiple things like backing up Minecraft servers, and it can reduce the final backup size by a lot (like 1-2 TB down to a hundred GB, though that was with content that was highly compressible and didn’t change much over time, so the deduplication did a lot too).
There is also borgbase.com, which looks a bit better and focuses only on borg repositories, instead of also being compatible with just about any of the usual tools (e.g. rsync, rclone, etc.).
Keychron should have something that covers all your requirements.
In what part exactly?
The example is not perfect, I can see that myself. If I read into it too much there could be an overlap with concurrency - e.g. the (IO) tasks awaited & delegated to the OS could be considered a form of concurrency - but other than that I do think it’s close to describing how async usually works.