DC++ 0.866

DC++ 0.866 is out. This release fixes a serious issue that allowed remote denial of service attacks: any user of a connected hub could freeze the client remotely. Besides the hardened security, version 0.866 also improves UPnP port mapping, which might fix certain issues with the automatic connectivity setup.

The details of the vulnerability will be disclosed as soon as 0.866 or any forthcoming DC++ release is marked as stable.

Why DCNF uses HTTPS via Let’s Encrypt

All DCNF web services either use HTTPS or are being transitioned to HTTPS.

The US government’s HTTPS-only standard and Google’s “Why HTTPS Matters” describe how HTTPS enables increased website privacy, security, and integrity in general. ISPs, home routers, and antivirus software have all been caught modifying HTTP traffic, for example, which HTTPS hinders. HTTPS also increases Google’s search ranking and, via HTTP/2, decreases website loading time.

Somewhat more forcefully, Chrome 56 will warn users of non-HTTPS login forms, as does the Firefox 50 beta and, per schedule, Firefox 51 will as well. This will become important, for example, for the currently-under-maintenance DCBase forums.

Beyond the obvious advantage of not costing money, Let’s Encrypt reduces friction compared with alternatives: it automatically, and therefore scalably, manages certificates for multiple subdomains, and that same automation allows shorter-lived certificates and more rapid renewal, which eases certificate revocation and security-at-rest concerns and thereby lowers HTTPS management overhead. Additionally, as crypto algorithms gain and lose favor, such quick renewals catalyze agility. These advantages of HTTPS in general and Let’s Encrypt specifically have led the DCNF to adopt HTTPS using Let’s Encrypt.

Setting up multiple-subdomain HTTPS with nginx, acme-tiny, and Let’s Encrypt

This guide briefly describes aspects of setting up nginx and acme-tiny to automatically register and renew multiple subdomains.

acme-tiny (Debian, Ubuntu, Arch, OpenBSD, FreeBSD, and Python Package Index) provides a more verifiable and more easily customizable alternative to the default Let’s Encrypt client. This proves especially useful in less mainstream contexts, where the main client either works magically or fails magically, but tends to offer little between those two outcomes.

The first step is to create a multidomain CSR (certificate signing request), which informs Let’s Encrypt which domains it should provide certificates for. When adding or removing subdomains, it needs to be altered:
# OpenSSL configuration to generate a new key with signing request for a x509v3
# multidomain certificate
# openssl req -config bla.cnf -new | tee csr.pem
# or
# openssl req -config bla.cnf -new -out csr.pem
[ req ]
default_bits = 4096
default_md = sha512
default_keyfile = key.pem
prompt = no
encrypt_key = no

# base request
distinguished_name = req_distinguished_name

# extensions
req_extensions = v3_req

# distinguished_name
[ req_distinguished_name ]
countryName = "SE"
stateOrProvinceName = "Sollentuna"
organizationName = "Direct Connect Network Foundation"
commonName = "dcbase.org"

# req_extensions
[ v3_req ]
# https://www.openssl.org/docs/apps/x509v3_config.html
subjectAltName = DNS:dcbase.org,DNS:www.dcbase.org

Then, when one is satisfied with one’s changes, run:
openssl req -new -key domain.key -config ~/dcbase_openssl.cnf > domain.csr
in the appropriate directory to regenerate a CSR based on this configuration. One does not have to change this CSR unless the set of subdomains or other information contained within also changes. Simply renewing certificates does not require regenerating domain.csr.
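
As a compressed, self-contained sketch of the whole flow (a 2048-bit key and pared-down copy of the config above, for brevity; use the full configuration in production), one can generate the key and CSR and confirm that both SANs actually made it into the request:

```shell
# Write a minimal copy of the configuration shown above.
cat > dcbase_openssl.cnf <<'EOF'
[ req ]
default_bits = 2048
default_md = sha256
prompt = no
encrypt_key = no
distinguished_name = req_distinguished_name
req_extensions = v3_req
[ req_distinguished_name ]
commonName = "dcbase.org"
[ v3_req ]
subjectAltName = DNS:dcbase.org,DNS:www.dcbase.org
EOF

# Generate a key, then a CSR from the configuration.
openssl genrsa 2048 > key.pem 2>/dev/null
openssl req -new -key key.pem -config dcbase_openssl.cnf -out csr.pem

# Sanity check: both subdomains should appear in the request.
openssl req -in csr.pem -noout -text | grep 'DNS:'
```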

Having created a CSR, one then needs to ensure Let’s Encrypt knows where to find the challenge files that prove domain ownership. The ACME protocol Let’s Encrypt uses specifies that these should live under /.well-known/acme-challenge/ and, per acme-tiny’s documentation:
# https://github.com/diafygi/acme-tiny#step-3-make-your-website-host-challenge-files
location /.well-known/acme-challenge/ {
    alias $appropriate_challenge_location;

    allow all;
    log_not_found off;
    access_log off;

    try_files $uri =404;
}

This location needs to be accessible via ordinary HTTP, port 80, to work most conveniently, even if the entire rest of the site is HTTPS-only. Furthermore, this needs to hold even for otherwise dynamically generated sites — e.g., http://build.dcbase.org/.well-known/acme-challenge/, http://builds.dcbase.org/.well-known/acme-challenge/, http://archive.dcbase.org/.well-known/acme-challenge/, and http://forum.dcbase.org/.well-known/acme-challenge/ would all need to point to that same challenge location, even if disparate PHP CMSes generate each or they ordinarily redirect to other sites (such as Google Drive).
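
Concretely, a minimal plain-HTTP server block tying this together might look as follows (the server names come from the examples above; the challenge directory path is an illustrative assumption, not DCBase’s actual layout):

```nginx
server {
    listen 80;
    server_name dcbase.org www.dcbase.org;

    # Serve ACME challenges over plain HTTP, per acme-tiny's documentation.
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }

    # Everything else moves to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```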

If this works, then, when running acme-tiny, one sees:
Parsing account key...
Parsing CSR...
Registering account...
Already registered!
Verifying dcbase.org...
dcbase.org verified!
Verifying www.dcbase.org...
www.dcbase.org verified!
Signing certificate...
Certificate signed!


Once this works reliably, the whole process should be run automatically as a cron job often enough to stay ahead of Let’s Encrypt’s 90-day cycle. However, one cannot renew too often:

The main limit is Certificates per Registered Domain (20 per week). A registered domain is, generally speaking, the part of the domain you purchased from your domain name registrar. For instance, in the name www.example.com, the registered domain is example.com. In new.blog.example.co.uk, the registered domain is example.co.uk. We use the Public Suffix List to calculate the registered domain.

If you have a lot of subdomains, you may want to combine them into a single certificate, up to a limit of 100 Names per Certificate. Combined with the above limit, that means you can issue certificates containing up to 2,000 unique subdomains per week. A certificate with multiple names is often called a SAN certificate, or sometimes a UCC certificate.
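
A renewal cron job can key off the certificate’s remaining lifetime rather than renewing blindly. A sketch (the self-signed certificate here is just a stand-in so the commands run anywhere; in practice one would point -in at the deployed certificate chain):

```shell
# Demo stand-in: a throwaway self-signed certificate so the check runs anywhere.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=dcbase.org" \
    -days 365 -keyout key.pem -out chained.pem 2>/dev/null

# Exit status 0 means the certificate is good for at least another 30 days
# (2592000 seconds); a cron job can re-run acme-tiny when this starts failing.
if openssl x509 -in chained.pem -noout -checkend 2592000 >/dev/null; then
    echo "certificate still valid"
else
    echo "renewal needed"
fi
```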

Once Let’s Encrypt certificate renewal is configured, Strong Ciphers for Apache, nginx and Lighttpd and BetterCrypto provide reasonable recommendations, while BetterCrypto’s Crypto Hardening guide discusses the rationale behind these choices in more depth.
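
In that spirit, an nginx TLS configuration might look roughly like the following (certificate paths are assumptions; consult the guides above for current ciphersuite advice):

```nginx
ssl_certificate     /etc/nginx/certs/chained.pem;
ssl_certificate_key /etc/nginx/certs/key.pem;

# No SSL 3.0 or TLS 1.0; prefer forward-secret AES-GCM suites.
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256;
```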

Finally, SSL Server Test and Analyse your HTTP response headers offer sanity checks for multiple successfully secured subdomains served by nginx over HTTPS using Let’s Encrypt certificates.

Hardening DC++ Cryptography: TLS, HTTPS, and KEYP

BEAST, CRIME, BREACH, and Lucky 13 together left DC++ with no secure TLS support. Since then, the triple handshake attack, Heartbleed, POODLE for both SSL 3 and TLS, FREAK, and Logjam have multiplied hazards.

Fortunately, in the intervening year and a half, in response:

  • poy introduces direct, encrypted private messages in DC++ 0.830.
  • DC++ 0.840 sees substantial, wide-ranging improvements in KEYP and HTTPS support from Crise, anticipating Google sunsetting SHA1 by several months and detecting man-in-the-middle attempts across both KEYP and HTTPS.
  • OpenSSL 1.0.1g, included in DC++ 0.842, fixes Heartbleed.
  • DC++ 0.850 avoids CRIME and BREACH by disabling TLS compression; avoids RC4 vulnerabilities by removing support for RC4; prevents BEAST by supporting TLS 1.1 and 1.2; mitigates Lucky 13 through preferring AES-GCM ciphersuites; removes support for increasingly factorable 512-bit and 1024-bit DH and RSA ephemeral TLS keys; and with all but one ciphersuite, AES128-SHA, deprecated and included for DC++ pre-0.850 compatibility, uses either DHE or ECDHE ciphersuites to provide perfect forward secrecy, mitigating any future Heartbleed-like vulnerabilities.
  • DC++ 0.851 uses a new OpenSSL 1.0.2 API to constrain allowed elliptic curves to those for which OpenSSL provides constant-time assembly code to avoid timing side-channel attacks.

These KEYP, TLS, and HTTPS improvements have not only fixed known weaknesses, but also meant that DC++ 0.850 and 0.851 were never vulnerable to either FREAK or Logjam. As with perfect forward secrecy, these changes increase DC++’s ongoing security against yet-unknown cryptographic developments.

The upcoming version switches URLs in documentation, in menu items, and of the GeoIP downloads from HTTP to HTTPS. While these changes do not and cannot prevent attacks perfectly, they provide improved and still-improving cryptographic security for the benefit of all DC++ users.

BEAST, CRIME, BREACH, and Lucky 13: Assessing TLS in ADCS

1. Summary

Several TLS attacks since 2011 impel a reassessment of the security of ADC’s usage of TLS to form ADCS. While the specific attacks tend not to be trivially replicated in a DC client as opposed to a web browser, remaining conservative with respect to security remains useful, the issues they exploit could cause problems regardless, and ADCS’s best response thus becomes to deprecate SSL 3.0 and TLS 1.0. Ideally, one should use TLS 1.2 with AES-GCM. Failing that, ensuring that TLS 1.1 runs and chooses an AES-based ciphersuite works adequately.

2. HTTP-over-TLS Attacks

BEAST renders practical Rogaway’s 2002 attack on the security of CBC ciphersuites in SSL/TLS by exploiting TLS 1.0’s predictable, chained CBC initialization vectors as an oracle. Asking whether each possible byte in each position produces a matching ciphertext block, it decodes an entire message. One can avert BEAST either by using RC4 in place of CBC or by updating to TLS 1.1 or 1.2, which generate a new random IV for each record and thereby undermine BEAST’s sequential attack.

CRIME and BREACH build on a 2002 attack based on compression’s information leakage about plaintext. CRIME “requires on average 6 requests to decrypt 1 cookie byte” and, like BEAST, proceeds byte by byte, recognizing DEFLATE’s smaller output when it has found a pre-existing copy of the correct plaintext in its dictionary. Unlike BEAST, CRIME and BREACH depend not on the TLS version or on CBC versus RC4 ciphersuites but merely on compression. Disabling HTTP and TLS compression therefore avoids CRIME and BREACH.
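
The underlying length oracle is easy to demonstrate outside TLS with nothing but DEFLATE. In this sketch (plain zlib, no network; “secret=hunter2” is a made-up stand-in for a cookie), an attacker-injected guess sharing a longer prefix with the secret compresses to fewer bytes, betraying the correct next byte:

```python
import zlib

secret = b"secret=hunter2"  # hypothetical cookie the attacker wants to recover

def compressed_length(guess: bytes) -> int:
    # The attacker observes only the length of the compressed message
    # containing both their injected guess and the secret.
    return len(zlib.compress(guess + b" " + secret))

right = compressed_length(b"secret=h")  # correct next byte: longer back-reference
wrong = compressed_length(b"secret=x")  # wrong next byte: an extra literal needed
print(right, wrong, right < wrong)
```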

One backwards-compatible solution thus far involves avoiding compression due to CRIME/BREACH and avoiding BEAST with RC4-based TLS ciphersuites. However, a new attack against RC4 in TLS by AlFardan, Bernstein, et al exploits double-byte ciphertext biases to reconstruct messages using approximately 2^29 ciphertexts; as few as 2^25 achieve a 60+% recovery rate. RC4-based ciphersuites decreasingly inspire confidence as a backwards-compatible yet secure approach to TLS, enough that the IETF circulates an RFC draft prohibiting RC4 ciphersuites.

Thus far treating DC as sufficiently HTTP-like to borrow its threat model, options narrow to TLS 1.1 or TLS 1.2 with an AES-derived ciphersuite. One still needs to beware: Lucky 13 weakens even TLS 1.1 and TLS 1.2 AES-CBC ciphers, leaving between it and the RC4 attack no unscathed TLS 1.1 configuration. Instead, AlFardan and Paterson recommend switching “to using AEAD ciphersuites, such as AES-GCM” and/or modifying “TLS’s CBC-mode decryption procedure so as to remove the timing side channel”. They observe that each major TLS library has addressed the latter point, so that AES-CBC might remain somewhat secure; certainly superior to RC4.

3. ADC-over-TLS-specific Concerns

ADCS clients’ and hubs’ vulnerability profiles and relevant threat models regarding each of BEAST, CRIME, BREACH, Lucky 13, and the RC4 break differ from those of a web browser using HTTP. BEAST and AlFardan, Bernstein, et al’s RC4 attack both point to adopting TLS 1.1, a ubiquitously supportable requirement worth satisfying regardless. OpenSSL, NSS, GnuTLS, PolarSSL, CyaSSL, MatrixSSL, BouncyCastle, and Oracle’s standard Java crypto library have all already “addressed” Lucky 13.

ADCS doesn’t use TLS compression, so that aspect of CRIME/BREACH does not apply. The ZLIB extension does operate analogously to HTTP compression. Indeed, the BREACH authors remark that:

there is nothing particularly special about HTTP and TLS in this side-channel. Any time an attacker has the ability to inject their own payload into plaintext that is compressed, the potential for a CRIME-like attack is there. There are many widely used protocols that use the composition of encryption with compression; it is likely that other instances of this vulnerability exist.

ADCS provides an attacker this capability via logging onto a hub and sending CTMs and B, D, and E-type messages. Weaponizing it, however, operates better when these injected payloads can discover cookie-like repeated secrets, which ADC lacks. GPA and PAS operate via a challenge-response system. CTM cookies find use at most once. Private IDs would presumably have left a client-hub connection’s compression dictionary by the time an attack might otherwise succeed and don’t appear in client-client connections. While a detailed analysis of the extent of practical feasibility remains wanting, I’m skeptical CRIME and BREACH much threaten ADCS.

4. Mitigation and Prevention in ADCS

Regardless, some of these attacks could be avoided entirely with specification updates incurring no ongoing cost and hindering implementation on no common platforms. Three distinct categories emerge: BEAST and Lucky 13 attack CBC in TLS; the RC4 break, well, attacks RC4; and CRIME and BREACH attack compression. Since one shouldn’t use RC4 regardless, that leaves AES-CBC attacks and compression attacks.

Disabling compression might incur substantial bandwidth cost for little thus-far demonstrated security benefit, so although ZLIB implementors should remain aware of CRIME and BREACH, continued usage seems unproblematic.

Separately, BEAST and Lucky 13 point to requiring TLS 1.1 and, following draft IETF recommendations for secure use of TLS and DTLS, preferring TLS 1.2 with the TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 or another AES-GCM ciphersuite if supported by both endpoints. cryptlib, CyaSSL, GnuTLS, MatrixSSL, NSS, OpenSSL, PolarSSL, SChannel, and JSSE support both TLS 1.1 and TLS 1.2, and all but Java’s support AES-GCM.

Suggested responses:

  • Consider how to communicate to ZLIB implementors the hazards and threat model, however minor, presented by CRIME and BREACH.
  • Formally deprecate SSL 3.0 and TLS 1.0 in the ADCS extension specification.
  • Discover which TLS versions and features clients (DC++ and variations, ncdc, Jucy, etc) and hubs (ADCH++, uHub, etc) support. If they use standard libraries, they probably all (except Jucy) already support TLS 1.2 with AES-GCM, depending on how they configure their TLS libraries. Depending on results, one might already safely disable SSL 3.0 and TLS 1.0 in each such client and hub and prioritize TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 or a similar ciphersuite so that it finds use when mutually available. If this proves possible, the ADCS extension specification should be updated to reflect this.

DC++ 0.825

A new security & stability update of DC++ is released today. There are no new features this time; the update fixes a couple of severe security vulnerabilities discovered since the release of version 0.822. The following problems were fixed:

  • The client can crash in case of multiple partial file list uploads requested at the same time or shortly after one another. This problem affects the previous two releases (versions 0.820 & 0.822).
  • The originators of some types of ADC protocol messages aren’t correctly verified. This allows a malicious client to block outgoing connections of other users logged into an ADC hub by sending commands that should be accepted only from the hub. This problem exists in all earlier versions of DC++ and the solution needs fixes in various ADC hub software as well. A more detailed description of this vulnerability can be found in the original bug report.

Due to the nature of these bugs an immediate upgrade is recommended.

The road ahead: Security and Integrity

The community we are part of has had its fair share of security threats. These threats have originated from software bugs, protocol issues, malicious users and even from the developers of the network.

Security and integrity are very broad terms and my use of them is indeed broad, as I believe they address multiple points and need not necessarily be about remotely crashing another user. A system’s security and integrity are tightly coupled and may sometimes overlap.

We face a variety of issues.

Issue 1: Software issues
Writing software is hard. Really hard. It’s even more difficult when you also allow others to impact your system (client/hub, etc.): chat messages, file sharing exchanges and so on. Direct Connect hinges upon the ability to exchange information with others, so we cannot simply shut down that ability.

A software issue or bug arises differently depending on what type of issue we’re talking about.

The most typical bug is that someone simply miswrote code, “oops, it was supposed to be a 1 instead of a 0 here”.

The more difficult bugs to catch — and consequently fix — are design issues, which can be caused by a fundamental use of a component or the application’s infrastructure, “oops, we were using an algorithm or library that has fundamental issues”.

A security issue may stem from an actual feature — for instance, the ability to double-click magnet links. Here, the bug is that the software is not resilient enough against a potential attack: there’s nothing wrong with the code itself, it simply isn’t built to withstand a malicious user. (Note: This is not a criticism of magnet links specifically; they were simply an example.)

A software bug may not only allow malicious users or (other) software to exploit the system; it may also cause the integrity of content to crumble. For instance, pre-hashing, matching different files to each other was done via reported name and file size. This was ultimately flawed, as there was no way of identifying that two files were identical beyond the name and size, both of which can be easily faked.
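
The fix that hashing brought can be sketched in a few lines (SHA-256 from Python’s standard library stands in here for the Tiger tree hashes DC actually adopted):

```python
import hashlib

# Two "files" that are indistinguishable by reported name and size.
file_a = ("linux.iso", b"A" * 1024)
file_b = ("linux.iso", b"B" * 1024)

def name_size_identity(name, data):
    # Pre-hashing identity: reported name and size, both trivially fakeable.
    return (name, len(data))

def content_identity(name, data):
    # Hash-based identity: depends only on the actual bytes.
    return hashlib.sha256(data).hexdigest()

print(name_size_identity(*file_a) == name_size_identity(*file_b))  # True: falsely "identical"
print(content_identity(*file_a) == content_identity(*file_b))      # False: correctly distinct
```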

A software issue may be addressed by simply blocking functionality (e.g., redirects to certain addresses, stopping parsing after X characters, etc.). While this is the simplest course of action, removing functionality is often not what users want.

Issue 2: Protocol issue or deficiencies
Systems and protocols that allow users to perform certain actions carry with them a set of potential security issues. The problem with writing a protocol is that other people need to follow it: the developers of a piece of software may not be the same as the developers for the protocols. For Direct Connect, there’s a very close relationship between the two groups (it’s actually closer to one group at the time of writing), so this issue may not be that severe. However, there will always be a discrepancy between the maintainers of the protocol and software. Imagine the scenario where the developers for a software suddenly disappear (or are otherwise not continuing updates). The developers for the protocol cannot do anything to actually address issues. In the reverse situation, the software developers can simply decide for themselves (effectively creating their own ‘protocol group’) that things need to be updated and do so.

Any protocol issue is hard to fix, as you must depend on multiple implementations to manage the issue correctly. The protocol should also, as best as it can, provide backwards compatibility between its various versions and extensions. Any security issue that comes in between can greatly affect the situation.

A protocol issue may also simply be that there’s not enough information about what has happened. For example, the previous DDoS attacks were possible to (continue to) carry out because the protocol had no ability to inform other clients and hubs (and web servers, etc.) of what was happening.

The original NMDC had no hashes and as such no integrity verification for files. This was a fundamental issue with the protocol, and extensions were provided later on to manage the (then) new file hashing. This wasn’t so much a bug in the protocol; it was simply a feature NMDC’s founder hadn’t thought of.

When software is told to interact in a certain way according to the protocol, then those actions are in effect the protocol’s doing. For example, the (potential) use of regular expressions for searches is not a problem for the protocol itself: the specification for regular expressions in ADC is quite sparse and very simple. However, the problem with regular expressions is that they’re expensive to evaluate, and any client that implements that functionality effectively opens itself up to a world of hurt if people are malicious enough. While the functionality lies in the software’s management of the feature, it is the protocol that mandates its use. (Note: In ADC, regular expressions are considered an extension. Any extension is up to the developers to implement if they so choose. That is, there is no requirement that clients implement regular expressions. However, those that do implement regular expression functionality are bound by the protocol once they announce support for it.)
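
The cost is easy to see with a deliberately pathological pattern (standard Python re here, not any DC client’s actual search code); nested quantifiers make each extra character of a near-miss input roughly double the matching time:

```python
import re
import time

# Nested quantifiers force exponential backtracking on inputs that almost match.
pattern = re.compile(r'(a+)+$')

for n in (16, 19, 22):
    text = 'a' * n + 'b'  # never matches, forcing exhaustive backtracking
    start = time.perf_counter()
    result = pattern.match(text)
    elapsed = time.perf_counter() - start
    print(f'n={n}: match={result}, {elapsed:.3f}s')
```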

Issue 3: Infrastructure
The infrastructure of the system must withstand security threats and issues.

If a hosting service goes down for a particular software, then that software cannot ship updates responding to upcoming issues. Official development simply stops at that point on that service (and the developers need to find another route).

If a hosting service decides to remove old versions (say, because it prunes software after 2 years, or for legal reasons) then someone needs to keep backups of the information.

A large part of the DC infrastructure is the ability to connect to the available hublists. This issue was apparent a few years ago when the major hublists were offline while various software didn’t update. People simply couldn’t connect to hubs, and for beginners this is even more frustrating. There are now various mitigation approaches for these scenarios, such as local caching, proxy/cloud caching and even protocol suggestions to handle these scenarios and distribution avenues.

Infrastructure isn’t simply being able to download software and connect to a hublist, it is also the ability to report bugs, request features and get support for your existing software and resources.

A very difficult problem with infrastructure is that it is often very costly in money for the developers to set up. Not only that, it must be done properly, which is also costly in time and hard. Moreover, most people aren’t experts at setting up resources of this kind, and there is lots of information available online about avenues of attack against forums and websites.

Infrastructure issues can be mitigated by distributing some services among a set of people who maintain the resources, and by distributing other services out to the users (for example, allowing clients to automatically exchange hub lists). Obviously, the services must be there from the start, otherwise there’s little one can do.

Issue 4: People

Software, infrastructure and our ideas only go so far. If a person has the means and intent, they can cause various problems for the rest of the community. Most of the time, we envision a person trying to cause havoc using a bug in the system (or equivalent), but that is not the only concern we have when it comes to people and their interactions.

While a person with the know-how and the tools can cause tremendous problems, the people who can cause the most problems are those who control key resources within the system. For example, a hub operator may cause problems in a hub by kicking and banning people. But the hub owner can do much more than that, since they control the very resource that people are using.

That means the developers and owners of each resource must guard themselves against the others with whom they share that control. This is primarily a problem when the two (or more) people who share a resource disagree on an issue, and one party decides that they want to shut down that resource. The last instance of this was last year with ADCPortal, and other similar problems have occurred in the past.

The problem with this is that we all need to put trust in others. If we don’t, we can’t share anything and the community crumbles. A problem with resource ownership and control is a general problem of responsibility: if I own a resource (or have enough control over it), I am expected to continue developing it and nurturing that resource. If I do nothing as a response to security issues (and any other issue) then that resource eventually needs to be switched out.

The solution is to share resources in a way that allows people to contribute as much as possible. The community should encourage those who are open about content, and try to move away from a “one person controls everything” system. This is extra difficult and puts pressure on all of us.

The road ahead

Security cannot be obtained by not addressing the problems we face. The community gains very little from obfuscating the ‘when’ and ‘how’ of security: not being open about the security issues we face only slows down a malicious party so much.

Disclosure of security issues is an interesting aspect, and the developers owe it to the community to be as direct as possible. It does not help if we wait one day, one week or one year to inform people; anyone vigilant enough will discover problems regardless of when and how we announce them. Any announcement (or note in a changelog or other piece of information) shouldn’t cause people to treat the messenger badly. Instead, the key is to have an open dialogue between developers, hub owners, users and anyone else involved in the community. The higher the severity of the security issue, the more reason to treat any potential forthcoming issue directly and swiftly. I believe it would also be good if someone reviewed past security issues and put them together in a similar article or document, essentially allowing current and future developers to see problems that have been encountered and hopefully how they were solved (this has been done to a certain extent). Discussing security issues with security experts from various companies may also be a way forward.

The community must be active in security and integrity issues. A common phrase in development is to be “liberal in what you accept and conservative in what you send”. This applies to both software and protocol development.

Software should have clear boundaries where input from another user or client can cause an impact.

Protocols should be responsive regarding the hashing methods, algorithms and general security they use. The new SHA-3 standard is interesting in this respect, and it would be good if we switched to something that provides higher security or integrity for us. Direct Connect has gone from a clear-text system to a secure-connection system (via TLS and keyprints). The system could further be extended with the use of Tor or other anonymizing services, to provide the anonymity that other systems have.

The security of our system shouldn’t depend on “security by obscurity”; before DC++ added an IP column to its user list, people (incorrectly) believed that their IP was “secret”. The security of our system shouldn’t depend on obfuscating security issues, since they’ll only hit us even harder in the future. There are other cases where the normal user doesn’t know enough about security. One example is when people disclosed how a hub owner could sniff all the data from their hub and their users’ interactions. While I strongly believe it’s difficult to educate your users (on any topic, really), you shouldn’t lie to them. Instead, provide ample evidence and reassurance that the information is treated with care and that you, as developers and fellow users, consider security important.

Security is tricky because it may sometimes seem like there’s a security issue when there’s in fact not. This makes it important for us to investigate issues and not rush for a solution. It is also important that people don’t panic and run around yelling “security problem!” as if there’s no tomorrow (I’ve been the source of such a scare, I’ll admit). Equally important is that those who know more about security should decide protocol and software aspects, as the topic shouldn’t be subject to whimsical changes “because it makes no sense, right?” (I’ll once again, unfortunately, admit to being the cause of such an issue, regarding ADC; hopefully it will be rectified soon-ish).

The road ahead is to investigate security issues in a timely but proper manner, be pro-active and be up front about problems. Time should be spent investigating a component’s weaknesses, and that component should then be discarded if the hurdles are too difficult to overcome.