My Apache mod_remoteip patch to support passing Host/Port/Protocol headers from proxies has been updated.
Read about the patch here: Apache: extending mod_remoteip to support Host/Port/Protocol mangling natively
2019-10-18: important bugfix: fixed a previously unnoticed bug in RemoteProtoHeader option compatibility with keepalive connections (KeepAlive On enabled): it was causing all remote IP options to stop working on subsequent requests inside a single connection if HTTPS 'emulation' was enabled by the specific header. Thanks go to PASCAL for the bug report and for providing a good, clear test case for it. This version is tested for compatibility with Apache up to version 2.4.41 and should not bring any new issues if run with Apache 2.4.41 in production.
Quick download: httpd-2.4-remoteip-rpaf-2.4.41.patch
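For context, here is a minimal sketch of how the patched module is typically wired up. RemoteIPHeader and RemoteIPInternalProxy are stock mod_remoteip directives and RemoteProtoHeader is the option named in the changelog above; the header names and proxy range are illustrative assumptions, not prescriptions.

    # Load the (patched) module and trust the frontend proxy range
    LoadModule remoteip_module modules/mod_remoteip.so
    RemoteIPHeader X-Forwarded-For
    RemoteIPInternalProxy 10.0.0.0/8
    # With the patch, the scheme announced by the proxy (e.g.
    # X-Forwarded-Proto: https) is applied to the backend request,
    # 'emulating' HTTPS for the application:
    RemoteProtoHeader X-Forwarded-Proto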
Okay. So CentOS 8 is out. Having built the first few development platforms and a few database servers out of it over the last days, I have a little bit to share.
First of all, the kernel is actually faster. It's newer (4.18+) and contains many later optimizations that CentOS 7's kernel lacks, so it's no wonder. I assume it to be on par with UEK5 (4.14+) or even faster. AMD support for my home PC is also good: despite it complaining about my Ryzen CPU not being extensively tested, it behaves better than CentOS 7 under Hyper-V; at least I do see a slight improvement in compilation and response times. This is of course most probably due to GCC being newer as well, but the non-compilation part of the RPM package building process goes faster too, which may be related to rpmbuild optimizations instead. And I also see some performance improvements on the newer MariaDB servers I built on 8 recently, so it's not only GCC and rpmbuild that matter.
Phoronix posted some preliminary tests comparing the new release to the previous one, and it seems that the kernel version indeed makes a big difference there: https://www.phoronix.com/scan.php?page=article&item=centos-8-benchmarks&num=2 . I'm pretty set on using CentOS 8 as a base platform from now on.
I also like the base system memory footprint after cleaning the system up. In CentOS 7, it was hard to untangle the multiple dependencies when cleaning up the system: it had avahi running, it had tuned pulled in almost mandatorily, and so on, resulting in a footprint slightly below 200M in most cases. In CentOS 8, and that actually surprised me at first, I managed to remove tuned (which I don't like and don't use) and polkit, and there was no avahi installed by default. After cleaning this and a bit more stuff out, the clean minimalistic system memory footprint was ONLY 80MB. That came as another good surprise. So with CentOS 8 we can actually build tiny, smaller and tidier systems without putting in too much effort. And yeah, it even boots faster after all the cleanups.
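The cleanup boils down to something like the sketch below. tuned and polkit are the packages named above; the rest is illustrative, so check what depends on a package on your own system before removing anything.

    # remove the unwanted services mentioned above
    dnf remove tuned polkit
    # see what is still running and worth trimming
    systemctl list-units --type=service --state=running
    # check the resulting memory footprint
    free -m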
There are also a few spoons of tar in this big honey barrel, though. First of all, not all development packages (-devel) are present in the CentOS 8 repositories. When I started building the hosting system's patched httpd 2.4.41 and PHP 7.3 packages as the first test, I had around eight missing -devel packages on my hands. It boiled down to downloading the source RPMs (luckily they were there), building them (fixing the .specs for a few along the way) and then throwing everything besides the -devel subpackages away. The self-built -devels install well and get along fine with the native binary packages. In the end, I was able to build both httpd and php at the specific versions without many issues.
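The workflow looks roughly like this; libfoo is a hypothetical package name, and the commands assume the dnf-plugins-core download plugin and enabled source repositories.

    # fetch the source RPM for the missing -devel package
    dnf download --source libfoo
    # rebuild it (fix the .spec first if it needs it)
    rpmbuild --rebuild libfoo-*.src.rpm
    # install only the -devel subpackage, discard the rest
    dnf install ~/rpmbuild/RPMS/x86_64/libfoo-devel-*.rpm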
What comforts me is that backwards-compatibility iptables and network scripts packages are included. I was terribly afraid they had ditched them in favor of nft and NetworkManager, but I was wrong. The classic network scripts are written to complain about deprecation, but well, who cares (I did not even remove the message). They still do their job well without much extra bloatware, and that's all I need. As it occurs, some CentOS 7 packages can actually also be installed into 8 without a scratch because the major compat dependencies are present, but I heavily recommend against it anyway.
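If you want the classic stack, the compatibility packages can be pulled in like this. The package names below are, to my knowledge, the ones CentOS 8 ships; verify against your mirror, and think twice before disabling NetworkManager on a box you cannot reach physically.

    # install the classic network scripts and iptables services
    dnf install network-scripts iptables-services
    # switch from NetworkManager to the classic scripts
    systemctl disable NetworkManager
    systemctl enable network iptables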
The thing to be wary of is the new DNF package manager's package removal behavior. By default, when you remove a package that has dependencies, it also removes all orphaned dependencies you did not install by hand. So imagine you have installed rpm-application-1, which requires rpm-library-2 (installed as a dependency); if you then remove rpm-application-1, rpm-library-2 will also be removed unless it is used by one of the remaining packages. While this looks handy at first glance, it resulted in my system losing a few packages I used, like tar, when I removed some bloat. So the thing to keep in mind: if you want a package to remain after you remove all the other packages depending on it, dnf/yum install it manually first.
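There are two straightforward ways to guard against this; both are standard dnf features, and the package name is just the example from above.

    # mark an already-installed dependency as user-installed,
    # so dependency autoremoval will leave it alone
    dnf mark install tar
    # or turn the behavior off globally in /etc/dnf/dnf.conf:
    #   clean_requirements_on_remove=False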
Another thing that grated on me is the incompleteness of not only the base but also third-party repositories: MariaDB (which actually has the packages present, but absent from the metadata), EPEL, etc. It seems like everyone in the world was in a terrible hurry with this release, and the number of launch mistakes here and there is pretty high because of that. For that reason I'll, for example, build my own packages of some new and older EPEL stuff and won't even bother adding EPEL to the repo list as I did with 7. Around launch time (and later on as well, though less pressingly), the fewer third-party repositories there are, the fewer problems are to be encountered.
So, what does all this mean? It means CentOS 8 is not without issues, but is totally usable, so I'll be posting hosting system packages in my repository for CentOS 8 soon enough. CentOS 7 packages will continue to be there for a good while, though.
The userbenchmark.com site, pretty popular for comparing relative CPU performance, has made a significant change, rendering its overall CPU 'relative index' meaningless for comparing CPUs that have more than 4 cores.
Basically, what they did was shift the weight of the multi-core performance tests in their overall rating from 10% down to 2%, which is at or even below the error margin, given the distribution of test results. This means the full multi-core performance of a specific CPU is now not reflected in their rating properly (if at all), and only single- to quad-core performance matters.
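To illustrate with made-up numbers (the exact rating formula is not public, so treat this purely as an illustration of the weighting): take two CPUs that both score 100 in the single-to-quad-core tests, where a 16-core part scores 300 in the multi-core test and a 4-core part scores 100. With the old 10% weight the ratings come out as 0.9*100 + 0.1*300 = 120 versus 0.9*100 + 0.1*100 = 100, a clearly visible 20% gap. With the new 2% weight they become 0.98*100 + 0.02*300 = 104 versus 100, a 4% difference that is easily lost in measurement noise.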
As more and more general consumer CPUs have come with more than 4 cores over the last years, this decision can be considered ridiculous. Even more so with CPUs that have 12+ cores, because their multi-core performance is tremendous; seeing them rated not much faster than 4-core CPUs in the general rating is not much help. Even worse, looking at a rating assembled that way, people can come to think a 6-core or 8-core CPU is faster and make a totally wrong decision on their purchase.
So, if you were using userbenchmark.com to compare the hardware you have or are planning to have, it's now better to either look at the detailed test results instead of the 'calculated' rating, or to resort to other benchmarking sites out there.
It's a pity, though. Userbenchmark offered a pretty good 'quick' rating that actually reflected the situation in a more or less unique way, suitable for both browsing-and-gaming users and even power users. Now it offers a rating oriented solely towards browsing and low-end gaming that cannot be trusted for general CPU comparison.
Everything written in this blog is solely personal opinion or experience. You don't have to agree with it; more than that, always take it with a good grain of salt.
The YUM repository is now HTTPS-enabled and also contains a .repo file to easily attach the repository to your server's yum installation.
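Such .repo files follow the standard yum/dnf format; the sketch below shows the general shape, with the repo id, name and URLs being placeholders rather than the real values from this repository.

    # /etc/yum.repos.d/example.repo
    [example]
    name=Example package repository
    baseurl=https://repo.example.com/centos/$releasever/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=https://repo.example.com/RPM-GPG-KEY-example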
Almost all software packages (both binary and source ones) were also updated to their current respective versions.
1. They are overpriced for their performance: a 70% difference in cost for a 20% difference in performance. You can find a lot of benchmarks out there; look mostly at application benchmarks, because you will never be using the full CPU performance in anything but sufficiently heavy applications.
2. A whole bloody lot of critical vulnerabilities specific to them have been discovered recently; when worked around ('fixed up') by CPU firmware and the OS, the mitigations drop performance quite a lot.
3. They change motherboard sockets every now and then on a whim, so a further upgrade would not be easy.
4. The good motherboards and chipsets for these are overpriced as well; everything else is just meek.
5. They are technically inferior, manufactured on a 14 nm process, while current AMD CPUs are at 12 nm and the next generation, coming soon, will be at 7 nm.
6. They drop the frequency of the whole CPU under heavy AVX vector calculation loads - https://blog.cloudflare.com/on-the-dangers-of-intels-frequency-scaling/
7. They have a stable, huge user base of overclockers, meaning buying such a CPU second-hand can result in getting a 'nicely grilled' one.
8. Recently, there were reports that the declared TDP (thermal design power, which corresponds almost directly to how hot the CPU runs) for the top-class desktop CPUs may actually not be real - https://www.anandtech.com/show/13591/the-intel-core-i9-9900k-at-95w-fixing-the-power-for-sff
9. If you prefer just an integrated video card in your laptop or even desktop, the performance of their integrated GPUs is totally terrible compared to AMD's - look at the gaming benchmarks here, because for office productivity (except for some specific cases such as 3D design, modeling and some other specific jobs) the GPU mostly does not matter.
10. They usually have thermal paste under the lid instead of solder (although some AMD CPUs have this flaw as well, and some Intel CPUs don't), leading to higher operating temperatures (and thus more noise from your CPU and case fans).
Everything written above is just a personal opinion; you should think on it twice and not take anything from it at face value too willingly.