Amusing site

Hey guys, it's been a while since I went on the offensive, ragtagging the remarkable sites I've found regarding DC++ development. Well, this time I found a site that tells users that updating isn't a good thing and that they shouldn't do it:

“Sometimes upgrading to a newer version can be a good thing. Other times, your computer may not be compatible with the new version, the new version is bloated, or all the options you liked are no longer available.” –

Their latest recommended version is 0.674, and the registrant is GoDaddy. Go figure, since it's probably not GoDaddy that's behind the site; they're just the domain registrar. We do not recommend that any user actually take their advice and skip updates, since that can result in a lot of unwanted issues: TTH corruption, OpenSSL exploits, ADCGET exploits, and so on.

“We believe that every computer user has the right to use a version of the product that he or she is most comfortable with, not the one dictated by the software developer, so we provide access to the files that are no longer obtainable.” –

In any case, I just wanted to show it off since it was funny; I didn't think such sites existed. Sure, it's good to have a mirror site, but we want mirror sites to be kept up to date. If they want to host older versions, fine, but don't tell users not to upgrade past them. Our collaboration with FileHippo is a fine example of how we want this to work, so our hats go off to them.

If they were serious about mirroring, they could at least update the site to 0.770 instead of claiming that 0.674 is the current version.


9 Responses to Amusing site

  1. vortexvandel says:


    Versions 0.306 and 0.674/0.673 etc. are used by the LANning community, as hashing 4 to 8 TB of files is fairly typical and can be tedious. Also, maintaining gigabit speeds was, until recently, not really possible with most sub-$1000 PCs (this was true up till about a year ago; the new i7s etc. can usually keep up, and then it's just a matter of having decent RAID/IO on your motherboard or a decent RAID card).

    Also, storing hashes for multiple TB of data uses a fair amount of memory, which slows a lot of average PCs down when playing games at the same time, or just causes too many I/O bottlenecks.

    Again, not everyone has 4 GB or more of RAM or a quad-core PC.

    If the more up-to-date versions of your software had a no-hashing option or a compatibility mode to talk properly with 0.306, you wouldn't see these issues happening.

    We usually run 0.306 or 0.673/0.674 on Cisco gigabit networks and find no data corruption issues, as a proper network has that kind of integrity checking built in anyway.

    • Toast says:

      So does that justify forgoing security aspects just for that?

      We do not support or condone the use of older versions in any shape or form, since we want Direct Connect to improve.

      • T Rex says:

        The reduction in throughput and the increase in CPU usage is a dealbreaker. People who use DC++ at LANs really need two things:
        * an indexing service
        * throughput
        The latest versions of DC++ do NOT meet these requirements and are therefore not suitable. I agree with you that it would be highly preferable to run the latest version; however, it's simply not the right tool for the job.
        Clearly your development focus is on internet use; however, given the usage of DC++ at LANs, this puts you somewhat at odds with a nontrivial portion of your userbase.
        The LANning community WANTS to use an updated version of DC++. They don't avoid it out of pettiness, spite, or laziness, but because the old version provided a feature that is seen as critical and the new one doesn't.

  2. vortexvandel says:

    Your comments about why a person would want to use an older version of your software seem confused. I am just telling you why some people don't use this style of program (note I did not say version), as the old style avoids issues with large file collections. FTP, for example, is quite robust and not really seen as insecure; its only issues are that it has no easy way to share a file list or search without connecting to the server. However, it's still used frequently all over the net, because it handles a need that its users don't need to replace. The same goes for the older versions. If the newer revisions of DC++ could talk to non-hashing clients, this issue would not arise.

    Yes, I know the arguments for hashing vs. non-hashing. However, a lot of them don't apply when you run DC++ only on a local LAN. As for security, it depends on what exactly you mean by that (either protocol security to stop spoofing etc., or remote exploits). It can slow transfers down without really adding that much extra redundancy. When the network is purposely designed to handle fast speeds to each client (unlike an ISP, which usually drops packets all over the place to limit speeds) and integrity checking is built in, the extra overhead of more hashing is pointless or not really worth it.

    YES, on the internet it's a really good idea to use hashing to make sure you don't get something you didn't mean to download, or at least to recover easily from a weird transfer glitch. I'm all for hashing in that way. However, the developers have stated their intent to remove the user's choice in the matter and force hashing. This is just plain stupid, and something I and a few others are surprised by. I doubt many people will really respond or comment on this, as most will just go "oh well, I'll just stick with 0.306 or 0.673 and deal with the slight bugs that usually appear." It also doesn't help that the hashing algorithms seem to change a lot (I lost count at about 3 or 4 times since 0.306), breaking old versions against newer ones. If that's the price of progress, then so be it.

    I'm not having a go at you, by the way, just trying to point out why the older versions are still around.

  3. Fredrik Ullner says:

    The largest problem with "creating a version that doesn't use hashing" is that hashes are quite ingrained in how DC++ communicates, internally as well as with other clients. This is most likely the reason no other client (that I'm aware of, at least) has implemented this. Feel free to submit a patch that would turn off hash support in DC++, but I can't tell you whether it'd be approved or not. Most likely not, but at least you'd be giving other people information about how to do it, as well as possibly providing a binary that they could use.
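To illustrate why hashes are so ingrained: DC++ identifies files by the root of a Merkle-style hash tree (its TTH is THEX-based, using the Tiger hash). The sketch below uses SHA-256 as a stand-in, since Tiger isn't in Python's standard library; leaf size, prefixes, and the odd-node rule are assumptions for illustration, not DC++'s exact parameters.

```python
import hashlib

LEAF_SIZE = 64 * 1024  # illustrative leaf size; DC++'s TTH uses different parameters

def leaf_hashes(data: bytes) -> list[bytes]:
    """Hash each fixed-size leaf block independently (0x00 prefix marks a leaf)."""
    return [hashlib.sha256(b"\x00" + data[i:i + LEAF_SIZE]).digest()
            for i in range(0, len(data), LEAF_SIZE)] or [hashlib.sha256(b"\x00").digest()]

def tree_root(hashes: list[bytes]) -> bytes:
    """Combine leaf hashes pairwise up to a single root (0x01 prefix marks an inner node)."""
    while len(hashes) > 1:
        paired = []
        for i in range(0, len(hashes) - 1, 2):
            paired.append(hashlib.sha256(b"\x01" + hashes[i] + hashes[i + 1]).digest())
        if len(hashes) % 2:          # odd node is promoted unchanged, THEX-style
            paired.append(hashes[-1])
        hashes = paired
    return hashes[0]
```

Because the same root serves as the file's identity for search, queueing, and per-chunk verification, ripping hashing out means replacing the way files are named across the whole protocol, not just skipping an integrity check.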

    If you are having memory or CPU issues related to hashing, you might be well off reporting it as a bug, provided you can supply steps to reproduce the memory leaks, runaway CPU use, etc.

    The hashing algorithms have stayed the same, as far as I know. When and how did they change?

    I don’t see how using hashes could slow down your connection… Could you elaborate?

    Have you considered physically moving the hard drives? In my experience this has worked as well as or better than using DC or any other protocol or network, for that matter. (Granted, in a large LAN network, that might not be feasible.)

    (Disclaimer: I don't have much development involvement in DC++ these days.)

    • vortexvandel says:

      Try searching through the various release notes; I'm sure the hashing changes are in there somewhere. Perhaps I exaggerated, but I do remember it changing at least once (0.406?) in a way that broke compatibility with older hashing clients. If you really want, I'll go searching for it.

      "Memory or CPU issues" relate to the way the hashing is implemented. As I've already stated, if you have decent HDD I/O and CPU on your motherboard, or a RAID card (say a Q6600 or i7 920 with a PERC 5/i with battery backup and 256 MB cache), you will not see any real slowdown. But that sort of hardware starts at around $600 AU new, which is not the kind of money most mid- to low-end builders will spend when they could instead spend $300, and put another $300 into a graphics card.

      It's not so much memory leaks as the kind of use the program gets at LANs: most people bring one PC that they use to game and run DC++ at the same time.

      Once you start transferring data at a sustained 30 to 60 MB/s, the hardware requirement goes up a lot, which is a direct result of having a hashing client.

      Most people keep PCs for more than six months or a year at a time, and it's only since about the start of this year that this has become less of an issue, as the cost of relatively powerful hardware has come down.

      And a patch is not the issue; it's a design philosophy that has caused this. I'm curious as to what the reason for internal hashing is. I know data integrity is important for internet use, and staying in sync with other clients is a nice idea, but it would be nice to have a legacy mode.

      Physically moving the HDD? Do you mean moving the download and upload directories so that disk I/O is spread out more? The LANs I am talking about have approximately 300 to 500 players, so physically disconnecting a HDD is a pain and a waste. Usually, if people are using DC++ at a 400- to 500-player LAN, you will see about 1 PB (yes, petabyte) of data transferred during the event, and most of the clients in use are not the newer ones.

      Hashing really will slow down transfers when you go above about 30 or 40 MB/s sustained on most PCs (PCs older than two years, for example: single and dual core). It's not directly a CPU issue, but usually a bus or general I/O issue in cheap PCs. You download a chunk of data and calculate its hash; meanwhile you are still downloading data, and your network buffers fill up. Then your disk buffers fill up to cope with downloads from a gigabit network, and the whole system ends up slowing down. If you are lucky, you realise this and just leave the client alone, don't play any games at the same time, and hope it won't crash. 0.306 copes with this fairly well by not having the hashing step that causes so many bottlenecks.
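The stall described above (receive, hash, receive, buffers fill) is a pipeline problem: hashing on the receive path blocks the next read. A common mitigation is to move hashing to a separate worker fed by a bounded queue, so the network thread keeps reading while an earlier chunk is still being hashed. This is a minimal sketch of that idea, not how any DC++ version is actually structured; the names and chunk size are made up for illustration.

```python
import hashlib
import queue
import threading

def hash_worker(chunks: "queue.Queue", out: list) -> None:
    """Consume downloaded chunks and hash them off the receive path."""
    while True:
        chunk = chunks.get()
        if chunk is None:            # sentinel: download finished
            break
        out.append(hashlib.sha256(chunk).digest())

def download(stream, chunk_size: int = 1 << 20) -> list:
    """Read from `stream` at full speed; hashing happens on another thread."""
    chunks: "queue.Queue" = queue.Queue(maxsize=8)  # bounded: caps memory use
    digests: list = []
    worker = threading.Thread(target=hash_worker, args=(chunks, digests))
    worker.start()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        chunks.put(chunk)            # blocks only if the hasher falls 8 chunks behind
    chunks.put(None)
    worker.join()
    return digests
```

The bounded queue is the important design choice: an unbounded one would just trade the network stall for unbounded memory growth on a slow hasher.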

      There's also indexing. Indexing 2 TB+ of files takes both time and memory. Each time you download a file, you need to search the index for a match (relatively minor, I know), calculate a new hash for the data block you've just grabbed, compare it to the block hash, and update your in-memory index for files in transit (files being served and files being written). It all adds up and unfortunately causes slowdowns.

      Whether or not DC++ keeps all hashes in memory at the same time, I personally don't know. If you are sharing 2 TB or more of data and your client occasionally has to read the hashes off a database somewhere, that will add to the slowdown.
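For what it's worth, the memory-vs-disk trade-off for a hash index is usually handled with point queries against an on-disk store rather than keeping every hash resident. A minimal sketch using SQLite (the schema and function names here are invented for illustration; this is not DC++'s actual index format):

```python
import sqlite3

def open_index(path: str = ":memory:") -> sqlite3.Connection:
    """Hash index kept on disk (SQLite) rather than resident in RAM."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS hashes ("
        "root TEXT PRIMARY KEY, file TEXT, size INTEGER)"
    )
    return db

def add_file(db: sqlite3.Connection, root: str, file: str, size: int) -> None:
    """Record a file's tree-hash root once, after hashing."""
    db.execute("INSERT OR REPLACE INTO hashes VALUES (?, ?, ?)", (root, file, size))
    db.commit()

def lookup(db: sqlite3.Connection, root: str):
    """Point query: only the matching row is read, not the whole index."""
    return db.execute(
        "SELECT file, size FROM hashes WHERE root = ?", (root,)
    ).fetchone()
```

With a primary-key index, each lookup touches only a few pages, so the per-transfer cost stays small even for multi-TB shares; the RAM cost is the database cache, not the full hash set.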

      Yes, I don't expect new DC++ revisions of the current design to address these issues, as unfortunately it's part of how DC++ works in order to be optimized for internet use. If you really want, I'm sure I can find more info or a source detailing the main issues we have at LANs.

  4. Toast says:

    In all fairness, development is more focused on WAN than LAN features, and that's where we want the updated clients. If it's just a matter of sharing stuff, set up FTPs; that isn't hard :)

    • vortexvandel says:

      I don't think you quite get it: 400 to 500 people turning up for a LAN, each with an FTP, and an average of several minutes to search each one? Do the maths; it's a big waste of time. Older DC++ gives us the right balance, even if it's a buggy, non-secure application.

  5. Toast says:

    Well, this post was mainly intended for WAN users…
