Alexander Nasonov's shared items

Nothing special

Thursday, October 02, 2014

What two numbers come next?

This is my daughter's homework. While it's good for acquiring pattern-matching skills, schools in England don't teach 7-year-olds that such sequences are called arithmetic progressions, and, more importantly, they don't teach that there are always multiple patterns (I suspect teachers don't want to make their lives any harder by having to accept multiple answers in tests).
In the internet era, you can find specialised search engines for many things, and sequences are no exception. There are 250,000 sequences in the OEIS database.
Let's try to find some interesting elementary sequences containing 4,8,12.
The most obvious sequence is Multiples of 4. Then there is the Fibonacci sequence beginning 0, 4. Less trivial are Product of decimal digits of n with n=41,42,43 (though its next two numbers are no different from the arithmetic progression) and Alternately add and multiply.
I explained the first three to my daughter, but she liked her answer (the first one) more. Then she got bored and switched to reading a book.
For older children, there are nice sequences like Floor(n^2/2), Numbers n such that 9*n+1 is prime, or my favourite, The middle member 'b' of the Pythagorean triples (a,b,c) ordered by increasing c (and the similar sequences A009012 and A009023).
It's time to stop giving sequences for homework, because smart kids with internet access can easily make their teacher feel dumb. Or even bring the teacher to tears with Number of divisors of Catalan number A000108(n).

Saturday, March 24, 2012

Two opinions on ropes

http://thread.gmane.org/gmane.comp.lang.lua.general/89440/focus=89529

In general, I agree with Mike, but maybe Josh is right in this particular case?

Friday, March 09, 2012

Popularity of boost versus popularity of pcre

Recently, there were two recursive PKGREVISION bumps in pkgsrc: one after an update of devel/boost and the other after an update of devel/pcre (Perl Compatible Regular Expressions). 59 files were bumped after the boost update and 1657 + 68 files were bumped after the pcre update. Even though boost has its own regex library, it's way less popular in the open-source world than pcre. You can draw your own conclusions.

Wednesday, November 02, 2011

More multi-core improvements

More multi-core improvements:

Matthew Dillon wrote up an explanation of how performance on systems with a lot of CPU cores has been significantly improved – up to 300%! (He says 200%, but I think he’s treating it as a percentage of a whole rather than percent changed.) Apparently finally getting rid of lock contention is the trick.

Tuesday, November 01, 2011

Test 2

Linux performance improvements:

Two years ago I wrote an article presenting some Linux performance improvements. Those improvements are still valid, but it is time to talk about some new ones. As I am using Debian now, I will focus on that distribution, but you should be able to implement these things easily on other distributions too. Some of these improvements are best suited for desktop systems, others for server systems, and some are useful for both.

System update

First of all, it is important that your system is up to date. Update it to Debian testing if you have not done that yet. It will give you, amongst other things:

  • Updated eglibc 2.13, which includes functions optimized for instruction sets SSE2, SSSE3 and SSE4.2 provided by recent processors
  • Updated GCC 4.5 (with GCC 4.6 being on the way)
  • Updated graphics stack with Xserver 1.10, Mesa 7.10.2 and new X drivers and up to date pixman and cairo libraries, all improving performance
  • A recent kernel which brings improvements to the process and disk schedulers, better hardware drivers, transparent hugepages (see below), scalability improvements to the EXT4 and XFS file systems and the Virtual File System layer, vhost-net for reduced network latency in KVM virtual machines, and more. Debian testing has a 2.6.38 kernel, while 2.6.39 is available in unstable and will migrate to testing in the near future.
  • Parts of GNOME 2.32, such as Evolution which has improved start-up performance and important bug fixes (for example support of mailboxes larger than 2GB)
  • Iceweasel 4 is available in Debian Experimental, and the upcoming version 5, which brings even more performance improvements, is already available in an external repository.

Transparent hugepages

Transparent hugepages is a feature introduced in Linux 2.6.38 which can improve performance of applications. The processor has a translation lookaside buffer (TLB), a CPU cache used to speed up the mapping of virtual memory addresses to physical memory addresses. This TLB has a limited size. By transparently combining several small 4 KB pages into larger "hugepages", the same number of TLB entries can cover much more memory.

Transparent hugepages can be enabled on the fly, but this only affects applications started after you have enabled the feature. For this reason, it is best to activate it right from the start using a kernel boot parameter. With transparent_hugepage=always, the kernel will use transparent hugepages for any process. If you want to use transparent hugepages only for applications which explicitly indicate that they prefer hugepages, use transparent_hugepage=madvise instead. In Debian, you have to add one of these boot parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run the update-grub command. After the next boot, you can look at /sys/kernel/mm/transparent_hugepage/enabled to verify that the feature is really enabled.
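
A quick way to verify the state from a shell (read-only; the sysfs path is the one mentioned above):

```shell
# Print the active transparent hugepage mode; the active mode is shown
# in [brackets], e.g. "always [madvise] never".
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$thp" ]; then
    cat "$thp"
else
    echo "transparent hugepages not supported by this kernel"
fi
```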

Tuning ondemand cpufreq governor

The ondemand cpufreq governor (which should be used by default on most systems; make sure you have Debian's cpufrequtils package installed) tends to switch back to slower CPU frequencies a bit too early in some cases, hurting performance. By setting sampling_down_factor to a value higher than 1, you can prevent it from reducing the clock speed too quickly.

I have added this to my /etc/rc.local script:

if test -f /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor; then
    echo 10 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor
fi

On server systems, I even use 100 instead of 10.
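
To confirm the change, you can read the value back together with the active governor (cpu0 stands in for any core; paths as above):

```shell
# Show the current sampling_down_factor if the ondemand tunables exist,
# and the scaling governor of the first CPU.
base=/sys/devices/system/cpu/cpufreq/ondemand
if [ -f "$base/sampling_down_factor" ]; then
    cat "$base/sampling_down_factor"
else
    echo "ondemand tunables not present"
fi
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || true
```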

VM dirty ratio

By default, the VM dirty ratio is set to 20% and the background dirty ratio to 10%. This means that when 10% of memory is filled with dirty pages (cached data which still has to be flushed to disk), the kernel will start writing the data out to disk in the background, without interrupting processes. If the amount of dirty pages rises to 20%, processes will be forced to write out data to disk and cannot continue other work until they have done so.

By increasing these values, more data will be cached into memory, and data will be written out in larger I/O streams. This can be a good thing on servers with fast storage and lots of memory.

To increase these values, create a file /etc/sysctl.d/dirty_ratio.conf with these contents:

vm.dirty_ratio = 40
vm.dirty_background_ratio = 15

Then run <code>sysctl -p /etc/sysctl.d/dirty_ratio.conf</code> to make these settings take effect immediately.
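
You can read the active values back from procfs to check that the change took effect (read-only, assuming /proc is mounted):

```shell
# Print the current writeback thresholds, in percent of memory.
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio
```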

On desktop systems, the default dirty_ratio of 20 and dirty_background_ratio of 10 should be reasonable. You do not want too high a dirty_ratio on desktop systems, because applications will stall for too long if they have to write out all those dirty pages at once.

CFS scheduler tuning

CFS (Completely Fair Scheduler) is the name of the Linux process scheduler. By default it is tuned for desktop workloads. For server systems where throughput is more important than latency, Red Hat's tuned package proposes these sysctl settings for CFS:

kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
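
These can be persisted the same way as the dirty-ratio settings above; a minimal sketch (the file name cfs_server.conf is my own choice, and I write to a temporary directory here for illustration):

```shell
# Write the proposed CFS server settings in sysctl.d format and show the
# result; on a real system the file would go into /etc/sysctl.d/ and be
# activated with "sysctl -p /etc/sysctl.d/cfs_server.conf".
conf="${TMPDIR:-/tmp}/cfs_server.conf"
cat > "$conf" <<'EOF'
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
EOF
cat "$conf"
```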

ulatencyd

ulatencyd is a daemon which uses cgroups to give hints to the kernel's process scheduler, CFS, in order to improve desktop latency and make applications feel more responsive. It prevents individual applications from hogging the system and slowing down other applications. It is somewhat simpler than the much-hyped (but controversial) autogroup kernel patch, yet also more extensive, in that ulatencyd knows about different applications and desktops and knows how to configure the scheduler to improve responsiveness.

On Debian, you install the ulatencyd package and start the ulatencyd init script. The ulatency package contains the ulatency client too, which shows you the cgroups ulatencyd has set up.

In my opinion, ulatencyd is great for desktop systems, but I do not recommend installing it on server systems.

KVM performance improvements

When using Linux guests in KVM virtual machines, it is important to configure the network interface and hard drive as virtio devices in order to get the best performance. KVM also benefits from transparent hugepages: be sure to enable them in both the host and the guest machine.

VHostNet

VHostNet improves network latency and throughput of KVM guests. To enable it, you need to load the vhost_net kernel module on your host system. In Debian, you can add vhost_net to /etc/modules to load it automatically at boot. If you use libvirt to manage your virtual machines, VHostNet will then be used automatically when starting them. If you start qemu-kvm by hand, you will need to add the vhost=on option to the netdev device.
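
The steps above can be sketched as follows (needs root on a real host; tap0, net0 and disk.img are placeholder names, and the qemu-kvm command line is an illustrative fragment, not a complete invocation):

```shell
# Load the module now; on Debian, adding "vhost_net" to /etc/modules
# makes it persistent across reboots.
modprobe vhost_net 2>/dev/null || echo "could not load vhost_net (need root on a KVM host)"

# When starting qemu-kvm by hand, enable vhost on the tap netdev:
#   qemu-system-x86_64 -enable-kvm \
#       -netdev tap,id=net0,ifname=tap0,vhost=on \
#       -device virtio-net-pci,netdev=net0 \
#       disk.img
```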

Raw devices, disable cache and choose deadline I/O scheduler

For best performance, raw devices are recommended instead of qcow2 or other image files. In libvirt/virt-manager I have defined a storage pool on an LVM volume group, and let virt-install create logical volumes on it containing raw images.

It is recommended to disable I/O caching in KVM because it reduces data copies and bus traffic. In the libvirt XML file for your virtual machine, set the cache='none' attribute on the driver tag for the disk device. You can also use virt-manager to make this change: look for the cache mode under the advanced options for the disk.

Benchmarks seem to indicate that it is best to use the deadline I/O scheduler instead of the default CFQ scheduler. Using deadline in the guest also seems beneficial. To make deadline the default scheduler, edit /etc/default/grub, add elevator=deadline to the GRUB_CMDLINE_LINUX_DEFAULT variable, and run update-grub.
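
The scheduler can also be inspected and switched per disk at runtime, which is handy for testing before making it the boot default (sda is a placeholder device name):

```shell
# Show the current I/O scheduler of one disk, then try to switch it to
# deadline; the active scheduler is printed in [brackets].
dev=sda
sched="/sys/block/$dev/queue/scheduler"
if [ -r "$sched" ]; then
    cat "$sched"
    echo deadline > "$sched" 2>/dev/null || echo "need root (and a kernel with deadline) to switch"
else
    echo "no such block device: $dev"
fi
```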

Native AIO

Native AIO offers better performance than the thread-based AIO in KVM. However, it should only be enabled on raw images, because otherwise it can lead to disk corruption in some cases. This problem is supposedly fixed in recent kernels, according to information I got on the #kvm IRC channel, but better safe than sorry.

To enable this, add the parameter io='native' to the driver tag of the disk in the XML file for the virtual machine in libvirt.

CPU features

By default, KVM only provides a common, limited set of CPU instructions implemented by different CPUs from Intel and AMD. This is needed to permit live migration of a virtual machine to hardware with a CPU which does not implement all instructions available on the original system. If you do not plan on doing that, you can enable all instruction sets of your host CPU in the virtual machines, so that they can make use of all advanced features of your CPU (for example SSE3 and others). The easiest way to do this is with virt-manager: click on "Copy host configuration" in the Processor – Configuration settings of the virtual machine. The next time you start the virtual machine, it will have access to all extended instruction sets of your CPU.

KSM

Kernel Samepage Merging is a kernel feature which merges identical pages in memory. If you are running different virtual machines with the same operating system and applications in them, lots of memory pages will actually be identical. KSM saves memory by merging those identical pages.

To enable this on Debian, I have put this in my /etc/rc.local script:

echo 1 > /sys/kernel/mm/ksm/run
echo 200 > /sys/kernel/mm/ksm/sleep_millisecs

The last line is optional. It raises the interval between two successive memory scans, so that the CPU is not kept busy scanning for duplicate memory pages all the time.
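
You can check whether KSM is actually running and merging pages via the same sysfs directory (read-only):

```shell
# Report KSM status: "run" is 1 when active, and pages_sharing counts
# the pages currently deduplicated.
if [ -d /sys/kernel/mm/ksm ]; then
    echo "run:           $(cat /sys/kernel/mm/ksm/run)"
    echo "pages_sharing: $(cat /sys/kernel/mm/ksm/pages_sharing)"
else
    echo "KSM not supported by this kernel"
fi
```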

zram disk

If you do not have much RAM available in your system, it is useful to compress part of the data in memory. This can be done with a zram disk, a RAM disk on which all data is transparently compressed. On this zram disk you create a swap partition, to which you give a higher priority than the normal on-disk swap space. Once the available RAM (total RAM minus the RAM reserved for the zram disk) is used up, data will be swapped out to the zram disk, which is much faster than swap space on a rotating hard disk. This way, more data fits into RAM.

On my 1 GB netbook system which runs a full GNOME desktop, I have reserved 512 MB for the zram disk. To do so, I added the following in /etc/rc.local:

echo $((512*1024*1024)) > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 60 /dev/zram0
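
As a quick sanity check of the size calculation and the resulting swap setup (read-only apart from the arithmetic):

```shell
# 512 MB expressed in bytes, as written to /sys/block/zram0/disksize
# above: 512*1024*1024 = 536870912.
echo $((512*1024*1024))
# Is the zram swap actually in use?
grep zram0 /proc/swaps || echo "zram swap not active"
```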

Of course, a better solution is to add RAM to your system, especially on server systems.

Saturday, August 27, 2011

throw AlertException::getInstance();

Someone recently recommended disruptor to me. It's Java, but there is a port to C++ called distruptor-cpp.
The guys use several boost libraries, but the code looks like Java. This "pattern" caught my eye:

    throw AlertException::getInstance();

They obviously optimize a memory allocation:

/** Singleton pattern to avoid memory allocation in an exception */
static AlertException * ALERT_EXCEPTION;
static boost::signals2::mutex mutex;


Guess why they need a mutex here? To prevent creation of multiple instances of ALERT_EXCEPTION in AlertException::getInstance().

Friday, February 11, 2011

BIND 10

Holy shit, BIND 10 uses boost and python!

Wednesday, December 08, 2010

bin/44188: gcov sucks

Nice bug report:

>Number:         44188
>Category:       bin
>Synopsis:       gcov sucks
>Confidential:   no
>Severity:       critical
>Priority:       low
>Responsible:    bin-bug-people
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Fri Dec 03 08:00:00 +0000 2010
>Originator:     David A. Holland
>Release:        NetBSD 5.99.41 (20101130)
>Organization:
>Environment:
System: NetBSD tanaqui 5.99.41 NetBSD 5.99.41 (TANAQUI) #32: Wed Dec 1 01:20:02 EST 2010 dholland@tanaqui:/usr/src/sys/arch/i386/compile/TANAQUI i386
Architecture: i386
Machine: i386
>Description:

gcov is primitive and really painful to use for trying to handle coverage of a real project.

The really serious problem is that there's no way to tell gcov that certain lines of code are expected to not be reached. Any real project has some and perhaps many of these, whether they're genuinely meant to be unreached (e.g. lines that contain "assert(0)") or they're currently unreached because of limitations in your test apparatus and you want to shut gcov up about them so you can concentrate on the stuff that's tractable.

It also only has rudimentary support for coping with basic blocks that are less than a full line of code. This often arises with expressions that contain && and ||... particularly assert expressions... and using this support makes the preceding problem markedly worse. This problem also arises if you have large macros; the invocation of the macro functions as one line with perhaps many basic blocks.

Other problems and annoyances include:

   - It does not seem to be able to cope with inline functions at all;
     the counts always come out zero.

   - Running a gcov-enabled binary makes a mess because it leaves
     dump files in your source tree.

   - Then, to run gcov you have to then hunt all these dump files
     down, and when you do it rewards you by leaving more files in
     your source tree.

   - The logging code in gcov-enabled programs is apparently not
     multiprocessor-safe; if you run your test suite in parallel it
     spews warnings and the output files apparently get corrupted.
     (I hate to think what happens if you try to use gcov with a
     multithreaded program.)

   - gcov itself spams the tty when it runs; the information it prints
     is not particularly useful to have on the tty.

>How-To-Repeat:

Try working with gcov on a project that's being actively developed.

>Fix:

Someone(tm) should write new coverage tools.