2024.05.13 22:51 adventurepaul E-commerce Industry News Recap - Week of May 13th, 2024
2024.05.13 22:46 adventurepaul What's new in e-commerce? Week of May 13th, 2024
2024.05.13 21:45 Fknrayofsunshine App not found in settings?
2024.05.13 19:51 CanExports Moving files from 'download' NVMe to disk, whilst satisfying 14 day torrent seeding
2024.05.13 13:41 kurlymeister How to seed after I've moved and renamed a file?
2024.05.13 08:48 raly02 Blutopia inactive timeline
2024.05.13 05:09 wilddouglascounty Seen your first lightning bug yet? Also here's what to look for in area prairies and woodlands
submitted by wilddouglascounty to Lawrence [link] [comments]
2024.05.13 05:07 wilddouglascounty Kaw Valley Almanac May 13 -19: hedge apple flowers, prairie wildflowers, lightning bugs and more
submitted by wilddouglascounty to kansas [link] [comments]
2024.05.13 02:59 RedPeril Just bought vinyl for my plants. No regrets.
submitted by RedPeril to houseplants [link] [comments]
2024.05.13 02:56 NatalieMaybeIDK Stuck on Mars
2024.05.13 02:29 Academic_Law4109 how to randomize co-op
2024.05.13 02:04 _ButWaitTheresMore_ hypothetically, if i download a torrented file and stop seeding afterwards will having the file get me caught?
2024.05.13 00:30 Pierowtechneeks Qbit-nox d/l appears to hang after some time
2024.05.12 23:01 Special_Necessary_78 monerod "Host/subnet blocked" issue
2024.05.12 22:14 Excellent_Disk8673 What happened?
I was letting it download while doing chores and I came back to this. submitted by Excellent_Disk8673 to PiratedGames [link] [comments]
2024.05.12 22:11 nickchecking utorrent periodically sticks at 0% while downloading new torrents
I'm using uTorrent 2.2.1 on Win7/32GB RAM (I know, I know, I'm just resistant to change and it's worked for a decade) and over the last few months I've hit this issue where every once in a while (sometimes within a few days, sometimes it's okay for a couple of weeks), it'll just refuse to actually start downloading new torrents. It'll say Downloading but won't connect to the peers I know are there and just say it's at 0%. submitted by nickchecking to torrents [link] [comments] https://preview.redd.it/w3jik44pw10d1.png?width=490&format=png&auto=webp&s=3d79dd806c3c23951476ca0d26c9d67c66d0f4c7 I'll add the torrents to uTorrent, they'll queue up (I have it set to download one at a time), but then it'll just act as if there are truly no seeds and start the next torrent, and then the next, until there are a bunch of torrents at Downloading - 0% that I know all have seeds, from private trackers I've used fine for years. Crucially, when I restart uTorrent, they work fine. Which to me rules out issues of no seeds, blocked ports, etc, unless it's a setting that's changing after uTorrent starts. It's just getting to be a chore to keep restarting because I have 2,800 torrents seeding and it has to go through all of them again and move from Queued Seeding to Seeding. In terms of what's changed over this time, I'd say I've started torrenting more instead of getting from IRC over the last couple of years, so the number of torrents loaded has climbed quickly. I go through and delete a bunch every once in a while but at the moment it tends to stick around ~2,800. I did download and am still seeding a few torrents of around 400GB a few months back, in case that's somehow overloading something? I use VyprVPN but only for some sites, so it's usually running but disconnected. I also installed NordVPN a while back but don't use it, so it's usually not even running, though I guess the service is. I stopped the service just now but no change on the current stuck downloads.
Also, sometimes, though rarely, after being stuck on the downloading, a few minutes or hours later it'll just start working by itself and finish quickly. ETA: Oh, also, while I was downloading those large 400GB torrents, I kept hitting the Disk Overloaded error and so made some changes I saw online, mostly here: https://preview.redd.it/wu7w6fb1020d1.png?width=500&format=png&auto=webp&s=112949badb2b026cb7d655c23e47872e1ef0a9db Also set: bt.transp_disposition to 21, diskio.coalesce_write_size to 3097152, diskio.sparse_files to true, diskio.use_partfile to false. Now I'm wondering if I should just revert all of those...
2024.05.12 21:09 Walmitty Seeding and Leeching ...
2024.05.12 20:34 OkNefariousness7651 advice for restoring/recreating distros that have been deleted from a (non-anglophone) private tracker for lack of seeds since i was last able to seed them?
2024.05.12 19:00 Ok-Vacation5730 Creative upscaling all the way to 16K (and beyond) with WebUI Forge, a comprehensive how-to guide
WebUI Forge is a popular alternative ("fork") to the classic Automatic1111 platform. In this guide, I describe a complete routine for super high-resolution upscaling of AI-generated images with incremental adding of detail to their content, using either of two Forge features: SD Upscale and MultiDiffusion integrated. submitted by Ok-Vacation5730 to StableDiffusion [link] [comments] Due to reddit limitations, the included 16:9 demo image of a fantasy landscape is a version of the full-size upscaled image scaled down to 8K; the full-size image was produced with this routine from an original 2K picture generated with Leonardo.ai. Here's the link to the full-size 15360x8640 image (a 58 MB file). A lighter 16 MB version (87% jpeg compression) is available here. The source 2K image is included second in this posting. The folder with the complete selection of demo images from the project prepared for this post is available here. Routine prerequisites: Forge webUI running locally on a PC equipped with a capable GPU (RTX 4070 Ti Super with 16 GB, in my setup), or on one leased in the cloud (RunDiffusion, salad, runpod.io, vast.ai, sailflow.ai and the like). The author used the WebUI Forge version available within the StabilityMatrix package. The approach In the image upscaling business, the temptation is to upscale the image to the target resolution in one go, if possible: if the tool supports a 4x upscale, sure, let's use that! 8x, even better! That, however, is a very flawed approach: all you will get is a ruined image (or at best something far from what you wanted the result to be), and running times much longer than they should be. With this routine, I promote an incremental approach, where you increase the resolution at most 2x with each upscale step, keeping runtimes short to allow for more creative experimenting. The underlying idea is that you want to be in charge of the process and not rely too much on the magic of the AI, or on some dense, rigid workflow.
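The incremental approach can be sketched as a simple resolution ladder. Here is a minimal Python illustration (my own, not from the guide; it assumes a 1920x1080 "2K" source, which is an assumption) of planning the steps toward the 15360x8640 target:

```python
# Sketch: plan the incremental upscale steps, scaling each dimension
# by at most 2x per step until the target long edge is reached.
def upscale_ladder(size, target_long_edge, factor=2.0):
    w, h = size
    steps = [(w, h)]
    while max(w, h) < target_long_edge:
        scale = min(factor, target_long_edge / max(w, h))
        w, h = round(w * scale), round(h * scale)
        steps.append((w, h))
    return steps

print(upscale_ladder((1920, 1080), 15360))
# -> [(1920, 1080), (3840, 2160), (7680, 4320), (15360, 8640)]
```

Three 2x steps take the assumed 16:9 source to the demo image's final resolution, with a trial-and-error pass possible at each intermediate level.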
For that reason, I am strongly against upscaling in batches or in any automatic fashion. Each image is unique and requires an individual approach; especially so for anything meant to be called a work of art. The steps in detail Each step in this routine is done in 2-3 substeps. Substep 1: pre-upscaling the image. During AI-assisted upscaling with SD Upscale (SDU) or MultiDiffusion integrated (MD), the actual resizing of the image is always done by an upscaler model, not by an internal routine of the extension or the script (I will call them "methods" from here on). Typically, the user simply chooses the Scale By factor, selects one of the models available in the UI, like UltraSharp, foolhardy_Remacri etc, sets all the other relevant upscaling parameters and clicks on Generate. In contrast, in this routine the "raw" upscale operation is necessarily a separate substep; it's done explicitly in Forge's Extras, or using a standalone upscaler such as the highly recommended freeware upscayl, before proceeding to generate. This gives you fuller control over the overall look of the eventual upscaled output, the grain or the texture, and also prevents an unsolicited color shift. If you don't separate this part, it will be impossible to determine to what degree the upscaler model influenced the eventual output. Forge comes with plenty of top-class models already built in, and you can always look for a specialized one fine-tuned for the type of images you need upscaled (here's the site that hosts practically every sort). For more details on the subject, see this recent discussion.
This separation not only gives you more control over the output, but also saves significantly on total computing time: once you have a raw-upscaled image with qualities closest to the way you ultimately want the upscaled image to look, the pre-upscaling operation won't have to be performed at the beginning of every subsequent run of the method (and chances are you are going to do a lot of trial runs at each upscaling step). Conversely, if the pre-upscaled image has a texture with a particularly strong (synthetic) grain, or any artifacts of its own (which does happen occasionally), it might be difficult to alter that look within the detail-adding run (substep 2), so choose wisely! (FWIW, the models used for upscaling the demo image for this routine were 4xLSDIRplusC, HAT-L_SRx4_ImageNet-pretrain and 4xHFA2k; they seemed better suited to the type of synthetic landscape photos used in this project than the others I tried, but that's only my personal preference. In contrast, 4x_NMKD-Siax_200K proved unsuitable for this project, as this model tends to add too much noise to the upscaled output, making it appear unnaturally sharp.) Note: the pre-upscaling factor of 2x at each step is only a default; for even finer control, you might want to reduce it to 1.5 or even lower. Timing. Pre-upscaling is usually done very quickly: it takes about 20-30 seconds for an average model to 2x upscale a 4K image on my RTX 4070 Ti Super. 2x upscaling of an 8K image might take from 1.5-2 minutes (4xHFA2k) to anything between 4 and 12 minutes using a slower model like SwinIR or HAT-L_SRx4. Substep 2: adding detail to the pre-upscaled image with the chosen SD method and checkpoint. At this step, you use the pre-upscaled image as the source in the chosen Forge img2img method and run the process with a scale factor of 1.
Depending on the Denoising strength parameter, the checkpoint used, the prompt etc, this generation substep will add a variable amount of detail to the rendered image (see more on the ways to control the amount and contents in "Forge webUI parameters" below). For the demo image used in this guide, the details added have been of this variety: birds flying in the sky, features of the spire-shaped towers on the left and on the right, waterfall shapes, houses, flower petals, figures of people at a distance, sometimes a bicycle or a horse, and even a tiny village on the cliff that formed during subsequent upscale steps. By experimenting at each step, you arrive at a rendering that contains newly added detail in the quantity and appearance most appealing to you. Once you have that, you use the rendered image as the source for the next upscale step. Amazingly, as I witnessed more than once in my project, and as opposed to a more traditional img2img process, such an iterative process, wherein an image is generated and then used as the upscale source again and again, will not cause any degradation of the subsequent output (when generated with the right set of parameters, of course). The detail added on a detail-enriching step will usually be kept and further developed at the next one, given a carefully tuned set of parameters, and the sharpness will be retained without any visible softening of the texture. Keep in mind though that processing time increases with each upscale step and the increased resolution, so it makes sense to do most of the creative experimenting in the middle of the routine, at the 4K and 8K levels. Generating at the last, 16K level should be done just to keep the detail already introduced, not to add substantially new detail (unless you have a super-fast GPU, of course). Timing.
Forge runs very, very fast when img2img-processing with either 1.5 or SDXL checkpoints, processing a 4K image in about 1 minute with either method. It takes between 2 and 3 minutes to process an 8K image, and between 8 and 18 minutes for a 16K image, with all the right parameters set. (Automatic, in comparison, takes anything from 1 to 3 hours to do the same with an SD 1.5 checkpoint, and the quality of the output is much harder to maintain.) Substep 3 (optional): refining and fixing artifacts. You should be mindful of possible artifacts (small defects and off-color patterns), and particularly of visible tiles and seams that tend to appear in the generated image when you fail to moderate the process by lowering the denoise parameter and/or using the ControlNet Tile mode (more on this in the next part). The probability of tiles becoming visible also depends on the image contents: images with a light blue sky or a smooth gradient of any kind are particularly vulnerable, as demonstrated by the example images in the demo folder. I learned the hard way that, except in a few cases, it's practically impossible to get rid of the seams by any post-processing. Generally, you will have to discard an image with tiles and seams that are too prominent. Speaking of visible tiles and seams, I noticed that the MD method is more prone to that issue than SDU, while not being any faster in processing, so I recommend the latter for most use cases. That said, artifacts such as visible tiles and seams, as well as minor blemishes, can be made less pronounced (if not completely removed) by running substep 2 again with the just-rendered image as the input, with a different checkpoint and the Denoising strength or CFG parameter adjusted. This substep is only needed if the upscale step is the last one in the sequence; otherwise, substep 2 at the next level will most likely do this job.
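The per-level substeps described above can be sketched as a loop. The stub functions below are a toy model of the routine's structure (my own illustration, not working Forge code; the denoise schedule is hypothetical):

```python
# Toy model: an "image" is just its size plus an abstract detail score.
def raw_upscale(image, factor):            # substep 1: Extras / standalone upscaler
    w, h = image["size"]
    return {"size": (int(w * factor), int(h * factor)), "detail": image["detail"]}

def add_detail(image, denoise):            # substep 2: img2img at scale factor 1
    return {"size": image["size"], "detail": image["detail"] + denoise}

image = {"size": (1920, 1080), "detail": 1.0}
for denoise in (0.38, 0.33, 0.28):         # one entry per upscale level
    image = raw_upscale(image, 2)          # pre-upscale, 2x at most
    image = add_detail(image, denoise)     # enrich, then feed into the next step

print(image["size"])  # -> (15360, 8640)
```

The key property the guide relies on is that each level's enriched output becomes the next level's source, with no degradation when the parameters are tuned carefully.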
Also to consider: activating the Refiner option at the above substep, to be used with a secondary checkpoint (see below). Check out the demo folder for the most striking examples of tiles, seams and other artifacts. Forge webUI parameters Stable Diffusion checkpoint The choice of checkpoint is a major factor influencing the detail that will be added to the image during the generation process. Different checkpoints react differently to the input material; some hallucinate more readily than others with the same source. The checkpoints used to produce the demo image in this guide were: albedobaseXL_v21, juggernautXL_v9Rundiffusionphoto2, sleipnirSDXLTurbo_v125 and leosamsHelloworldXL50GPT4V for the SDXL version, and juggernaut_reborn, photon_v1 and absolutereality_v181 for the v1.5 one. Since upscaling, as described in this guide, is a multi-pass process in which the checkpoint is freely changed at each step, all of them contributed to the final result to varying degrees. Check out the demo folder for the most striking examples of hallucinations I encountered during this project. LoRa Specialized LoRas can also be used to influence the type and amount of detail added to the image, or the style it is rendered in (this is most likely how Magnific, Leonardo U-Upscaler and others of the kind support the various styles available in their UI). No LoRas were used in this project, however. Prompt Compared to other SD-based solutions, the prompt plays a much lesser role in this routine. In fact, you can use the same prompt for each upscaling project, something like "masterpiece, best quality, highres", and that will do it. As a rule, no specific terms or object names should be used in the prompt.
The reason is that large-size upscaling is always a tile-based process, wherein each tile is generated independently of the others, so if your prompt includes some specific object you want generated in the output, there is a high chance that that object will appear everywhere in the picture, or at least at every spot where the checkpoint "thinks" it is appropriate. This holds especially for MD-based upscaling with 1.5 checkpoints. For SDU under Forge with SDXL ones, it is less extreme, but I would still recommend avoiding specifics in the prompt. In this project, I went no further with the prompt than "majestic fantasy vista, cinematic, high contrast, highly detailed". Just inserting the adjective "mountainous" before "vista" would cause mountains to be rendered in the dark parts of the sky and other strange features to appear in the picture, without much added realism. In any event, due to the nature of the Stable Diffusion process, it is not possible to control what exactly the checkpoint will inject, and where, even with the best-crafted prompt. You can, however, try restricting it from injecting something you don't want by including various synonymous terms in the negative prompt. Sampling method ("sampler") In this project, I consistently used two classical samplers: DPM++ 2M Karras and, a bit less often, DPM++ 3M SDE Exponential. They seemed to be the fastest of the bunch and delivered the desired quality with a relatively low step count of 20-22. Some other samplers, like Euler and HEUN, proved too eager to hallucinate bizarre detail into the image (with the same step count), so I avoided them; others produced completely damaged output or were unacceptably slow. Sampling steps With a sampler chosen as above, a step count of 20-22 was sufficient most of the time for good output quality. Occasionally, I would raise the count to 25 or 30, or even 50, but could never notice much difference in the output.
Increasing the step count does make the generation take longer though, in an almost linear fashion. CFG Scale The CFG parameter has a major influence on the output, augmenting the hallucination as you raise it. With that, however, the chance of tiles, seams and other artifacts appearing also rises, so it's a good idea to be conservative with this value. As a rule, I would use a CFG no higher than 8 for a regular detail-adding generation, and restrict the value to 5 at the last (16K) step, to avoid undesired detail injection while keeping the overall sharpness level. With the MD method and 1.5 checkpoints specifically, raising this value to extreme levels such as 13-15 and setting tile dimensions just above 100 pixels will cause a profuse injection of the sharpest detail possible into the output, which can often produce a stunning effect, but is generally hard to control (not to mention additional artifacts). Denoising strength The denoise parameter, which is at the heart of this "creative", img2img-based upscaling routine, is the single most influential factor; you should use it within a pretty narrow range. The CFG, the sampler, the step count, the checkpoint and of course the prompt all play their role in the process, but the denoise value leads the way. Experimenting with it, you nudge the img2img generation to add a desired (relatively small) amount of detail, and no more. If you don't restrict the value of this parameter, your image will contain a wild ("insane") amount of detail, particularly when upscaling with the MD method, which tends to insert into the image, depending on the checkpoint, all kinds of stuff (NSFW included) at any spot imaginable. That is often accompanied by tiles appearing in the output image and other artifacts - which means, again, wasted time and effort. But visually, it can be great fun of course.
Recommended values (based on this project): between 0.28 (basic detail level, low hallucination) and 0.38 (new detail is prominent, with checkpoint-dependent hallucinations across the image). The lowest level of 0.28-0.30, or lower, is recommended at the last, 16K step. See also ControlNet integrated below on how to dampen the effect of the denoise parameter and keep the output faithful to the original image. Resize to, Width and Height / Resize by, Scale This must be the most mystifying part of the entire Automatic/Forge interface. After all this time, I am still figuring out, shall we say, the subtleties of its logic. The sliders labeled "Width" and "Height" play a different role in img2img-based upscaling than in regular image generation. When upscaling with Forge's SDU, these sliders don't set the resolution you want the image upscaled to (they won't allow values higher than 2048 anyway), but rather the tile dimensions used for upscaling/refining; see Tile configuration below. When running the SDU script under Forge, make sure that the Resize to (NOT Resize by) dialog box is in the foreground before clicking on Generate, or else it will take very long to process the image at 8K, and forever at 16K. The image's target dimensions are defined in SDU via its internal Scale Factor parameter. In contrast, when upscaling with Forge's MD, before clicking on Generate you need to ensure that the Resize by dialog box is in the foreground, with the Scale factor set properly, or else it will just generate an image of whatever dimensions are set by the sliders in Resize to - but luckily, in just a few seconds. For the purposes of this routine, the Scale can be set to 1. And that is the simplest part of the puzzle; all kinds of things can go wrong if you set something in the UI that the Forge developers didn't really anticipate. To avoid excessively long runtimes and other pitfalls, follow the guidelines below when upscaling.
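The parameter guidance above can be collected into a per-level cheat sheet. A hypothetical lookup table in Python (the structure and the 8K values are my own interpolation; the 0.28-0.38 denoise range, the CFG ceiling of 8, and the CFG 5 / low-denoise 16K settings come from the text):

```python
# Illustrative per-level settings for this routine, following the guide's
# ranges: denoise 0.28-0.38, CFG <= 8, steps 20-22, scale factor 1 per pass.
STEP_SETTINGS = {
    "4K":  {"denoise": 0.38, "cfg": 8, "steps": 22, "scale_factor": 1},
    "8K":  {"denoise": 0.33, "cfg": 7, "steps": 22, "scale_factor": 1},  # interpolated
    "16K": {"denoise": 0.28, "cfg": 5, "steps": 20, "scale_factor": 1},
}

def settings_for(level):
    return STEP_SETTINGS[level]
```

The point of the table is the trend, not the exact numbers: denoise and CFG both come down as resolution goes up, so the last pass preserves detail rather than inventing it.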
Tile configuration Image tiles are a core part of the design of HR upscaling in Stable Diffusion; I believe it's the only effective means to process large-size images without running out of GPU memory (VRAM) very quickly. In SDU, you set tile dimensions, as suggested above, in the Resize to dialog box. For SDXL checkpoints, I used 1024x1024 as well as 768x768 dimensions (the tiles don't have to match the image's aspect ratio), and for 1.5 checkpoints, the standard 512x512 dimensions, or 768x768, which worked equally well. An important parameter in both methods used in the routine is Tile overlap; it defines the pixel width and height of the overlapping area of adjacent tiles. Making it as large as reasonably possible helps tame the visibility of tile seams, at the price of slower computation. In my experience, an overlap of 64 pixels suffices for SDXL; smaller sizes could make sense for 1.5 checkpoints and in cases where visibility of tiles is not an issue. In any case, it's a good idea to use the default value first. The MD method, which includes its own set of sliders to define the tile configuration, has an additional parameter, Tile Batch Size. It defines how many tiles are held in memory simultaneously and processed in one basic operation; 8 is the maximum, which supposedly achieves the highest upscaling speed, and lowering it decreases the amount of VRAM used by the process. In MD under Automatic, setting this parameter lower than the default (to 4, 5 or 6) is an essential means of avoiding running out of CUDA memory; in Forge, it's less important, since that system has its own, by all indications much more efficient, memory management. ControlNet integrated To keep the upscaled/refined output as faithful to the source image as possible, the ControlNet Tile resample mode is used.
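To make the tile/overlap trade-off concrete, here is a small Python sketch (my own illustration, not Forge's actual tiler) that computes the tile boxes a tiled upscaler would process for a given tile size and overlap; a larger overlap means more tiles per image and therefore slower runs:

```python
def _positions(length, tile, stride):
    # Start coordinates along one axis; an extra tile is pinned to the far edge
    # if the regular stride would leave a strip uncovered.
    pos = list(range(0, max(length - tile, 0) + 1, stride))
    if pos[-1] + tile < length:
        pos.append(length - tile)
    return pos

def tile_grid(width, height, tile=1024, overlap=64):
    """Return (x0, y0, x1, y1) boxes covering the image with the given overlap."""
    stride = tile - overlap
    return [(x, y, x + tile, y + tile)
            for y in _positions(height, tile, stride)
            for x in _positions(width, tile, stride)]

# A 2048x2048 image with 1024px tiles: 4 tiles with no overlap, 9 with a 64px one.
print(len(tile_grid(2048, 2048, overlap=0)), len(tile_grid(2048, 2048, overlap=64)))
# -> 4 9
```

This also shows why seams cluster in smooth regions: each box is denoised independently, and only the overlap band gives adjacent tiles shared context to blend over.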
When this mode is activated, the effect of the Denoising strength parameter is dampened, which gives you a higher degree of freedom to play with the parameter without the associated risk of tiles and seams appearing, but at the price of longer (about 10-30%) processing times. In my experience, with ControlNet Tile resample switched on under SDU, the Denoising strength could be set as high as 0.4, with little or no visible artifacting in the output image. To engage this option, enable ControlNet Unit 0 in the Forge img2img UI, check Pixel Perfect, select Tile in the Control Type combo, select Tile resample in the preprocessor dropdown box and the corresponding model in the next box - usually control_v11f1e_sd15_tile when using a 1.5 checkpoint and ttplanetSDXLControlnet_v10Fp16 when using an SDXL one (the only one that worked for me; it might require explicit downloading and installing to Forge). Next, set Control weight to a value between 0.5 and 0.7 inclusive (this relaxes the ControlNet Tile fidelity, which we need for the purposes of detail-adding), and leave the rest of the ControlNet settings at their defaults. Note that the MD + ControlNet combination, as I found, doesn't really work under Forge: when both are selected, the upscale process starts quickly but then stagnates without any visible progress, for hours. (In contrast, the same combination works just fine under SDU, for both 1.5 and SDXL flavors.) Other parameters in Forge's img2img interface Clip skip. Leave at the default. Changing this might make no impact whatsoever; I have never checked. Resize mode. Leave at Just resize. Refiner. This is an interesting option worth experimenting with. It allows the user to select a secondary checkpoint whose output will be mixed with that of the primary one, at a point in processing specified by the Switch at parameter (reasonable values between 0.6 and 0.85).
Unfortunately, while potentially useful from the creative perspective, this option is too computationally costly, slowing down runtimes by anything from 2x (for SD 1.5) to 8x (for SDXL checkpoints, which are larger). This happens due to constant checkpoint loading and unloading (a rather time-consuming operation) performed for every tile being processed. Batch count / Batch size. Leaving these at 1 would be the practical choice, to avoid wasting your computing time - unless you want, say, to experiment with the Denoising strength at its higher values when upscaling at a low to medium image resolution. Seed. Usually left at -1 to allow random variation of the output. MultiDiffusion integrated. Enable this control when you want to upscale with this particular extension, as opposed to the SD upscale script. The choice of the specific Method, between MultiDiffusion and Mixture of Diffusers, does not affect the output that much (in my experience anyway); I understand it was retained in Forge for backward compatibility with MD under Automatic. An important related option: see Never OOM integrated below. Never OOM integrated: when selecting the MD extension for upscaling, it is necessary to enable this extension as well, and check the box labeled Enabled for VAE (always tiled), and not the other one above it. If this is not done, upscaling of a large-sized image will last indefinitely. In contrast, upscaling with SDU necessitates unchecking that option and leaving Never OOM inactive, for exactly the same reason. Script. Select SD upscale from the dropdown box when upscaling with SDU. In this case, MultiDiffusion integrated must be deactivated, or else neither of the two options will work properly. Also, when selecting SDU, make sure to deactivate Never OOM, as mentioned above. The Upscaler choice is up to your experimentation (see Substep 1 at the top), but for the purposes of this routine it is normally set to None.
All the other integrated extensions present in Forge's interface are best left inactive. Forge vs Automatic1111, SD 1.5 vs SDXL and the fate of Tiled Diffusion Based on my experience in this upscaling project, a few general conclusions can be drawn. For the purposes of the project, Forge WebUI proved a much better choice with either of the two methods: it runs significantly faster and its output is much less prone to visible tiling, seams and other artifacts than with the equivalent tools under Automatic. The MD implementation in Forge, however, is quite incomplete, with important features, such as Noise Inversion, slow modes of Tiled VAE encoding/decoding and others, left out, as compared to the Automatic version of MD (where it is called Tiled Diffusion, btw). In my view, this drawback of Forge is largely compensated not only by faster runtimes but also by much more robust, stable performance and, most importantly, much better tile management that practically solves the problem of visible tiles and seams (you won't even find an option like Seams fix in Forge; it's done behind the scenes, and done exceptionally well), not to mention that it allows running highres upscaling on GPUs with 8 GB VRAM. What's more, support for v1.5 checkpoints in Forge feels somewhat incomplete, at least as far as the MD/TD implementation goes. All in all, however, I feel that the MD/TD method is no longer relevant for HR image upscaling, given the drastic improvement of the SD upscale script. From my upscaling perspective, unless Automatic is merged with Forge to take advantage of the improvements in the latter, the former remains relevant only for MD/TD-based upscaling with 1.5 checkpoints. It doesn't help either that development of TD/MD has remained dormant for the last two months.
(Which is sad, since SDXL support in MD, native or under Forge, has not been developed beyond a nominal one - it doesn't really work well with checkpoints of that version, as I found in the course of my project.) Although, to be fair, an even more worrying picture holds for Forge development, which hasn't seen any update since February. The quality of SDXL-based generation as compared to that of v1.5 is a matter for a separate discussion; due to space constraints, I won't go into it here. I will just add that personally I find SDXL-based upscaling superior to v1.5-based upscaling for most use cases. The folder with the complete selection of demo images from the project prepared for this post is available here. https://preview.redd.it/3y1nwkod110d1.jpg?width=7680&format=pjpg&auto=webp&s=5c12e51fa2d44bb8336a7ca8c02094b349a61799
2024.05.12 14:26 janka12fsdf Is this bad download/upload speed normal?
People always say you should seed to at least a 1:1 ratio, but my seeding and downloading is so painfully slow that I don't know if I'm doing something wrong? submitted by janka12fsdf to animepiracy [link] [comments] This is a screenshot from a torrent with only 1 seeder, for reference. As you can see, the download speed is really slow. I know there is only one seeder, but shouldn't my download speed in this case theoretically be the same as the upload speed of that one seeder? I didn't port forward yet, so I'm wondering if that would fix it? Like I said, my upload speed is also horrible, even though it says I should have 20 Mb/s on the Ookla speed test. The biggest ratio I have is 0.3, and that's from the most popular torrent I have, so yeah, there seems to be something wrong here. Sorry for the rambling and thanks for the help!
2024.05.12 14:23 janka12fsdf Is this bad download and upload speed normal?
People always say you should seed to at least a 1:1 ratio, but my seeding and downloading is so painfully slow that I don't know if I'm doing something wrong? submitted by janka12fsdf to torrents [link] [comments] This is a screenshot from a torrent with only 1 seeder, for reference. As you can see, the download speed is really slow. I know there is only one seeder, but shouldn't my download speed in this case theoretically be the same as the upload speed of that one seeder? I didn't port forward yet, so I'm wondering if that would fix it? Like I said, my upload speed is also horrible, even though it says I should have 20 Mb/s on the Ookla speed test. The biggest ratio I have is 0.3, and that's from the most popular torrent I have, so yeah, there seems to be something wrong here. Sorry for the rambling and thanks for the help!
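As a rough sanity check on the 1:1 goal above: the time to reach a given ratio is just the bytes still to upload divided by your sustained upload speed. A back-of-the-envelope Python sketch (the 10 GiB torrent size is assumed for illustration, and ~2.5 MiB/s approximates a 20 Mbit/s line):

```python
def hours_to_ratio(torrent_gib, current_ratio, upload_mib_per_s, target_ratio=1.0):
    # Bytes still to upload to move from current_ratio to target_ratio.
    remaining = torrent_gib * 1024**3 * (target_ratio - current_ratio)
    return remaining / (upload_mib_per_s * 1024**2) / 3600

# e.g. a 10 GiB torrent at ratio 0.3, uploading at a sustained 2.5 MiB/s:
print(round(hours_to_ratio(10, 0.3, 2.5), 1))  # -> 0.8
```

If the observed upload speed is far below the line's nominal rate (as in the post), the real bottleneck is usually connectability (no port forwarding) or a lack of leechers to upload to, not the arithmetic.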
2024.05.12 14:13 YT_Flex4249 LOST ALBUM Crossing Rain - Dreams