# Infant Formula, Cow's Milk and Nutritional Blending

We compare the macronutrient content of baby formula with that of cow's milk and then try to recreate the formula by blending milk with water and adding supplements as needed.

Human milk has a vastly different macronutrient composition from the milk of other mammals, in particular cows. While the energy content is about the same, it is much richer in carbs and much lower in protein. One explanation is that human babies already have a large, active brain for learning and need the sugar to fuel it, while their bodies do not need to grow as rapidly as those of other mammals, which are much more mature at birth.

For parents who do not (or no longer) breastfeed their baby, a substitute made from a powder can be used, which mimics the nutritional content of human milk. As the baby grows older and its digestive system gets used to different kinds of food, cow's milk (and other dairy products) may be introduced into the diet.

Other parents have told me that they do not feed cow's milk as is, but dilute it with water during the night. I wondered whether this approach is justified with respect to the macronutrients, which led to the following analysis.

This analysis is done purely out of curiosity. Please consult a health care professional prior to changing your infant's diet!

## Data

As a reference for baby formula, we will use Hipp PRE BIO COMBIOTIK. They provide the following data on macronutrients:

(Energy: 66 kcal / 100 ml)
Fat: 3.5 g / 100 ml
Carbs: 7.3 g / 100 ml
Protein: 1.25 g / 100 ml



For cow's milk, we will use both whole milk as well as reduced-fat milk. The macronutrients as given for milk at LIDL are, for whole milk:

(Energy: 68 kcal / 100 ml)
Fat: 3.9 g / 100 ml
Carbs: 4.9 g / 100 ml
Protein: 3.4 g / 100 ml



And for reduced-fat milk:

(Energy: 48 kcal / 100 ml)
Fat: 1.6 g / 100 ml
Carbs: 4.9 g / 100 ml
Protein: 3.5 g / 100 ml

## Model

We set up an optimization model inspired by the classical diet problem, where we want to use four ingredients (water, whole milk, reduced-fat milk, lactose) to match the nutritional profile of baby formula.

My thinking was that a blend of whole milk and reduced-fat milk should get the amount of fat right, while a dilution with water will get the protein level down to the target. Finally, we can add sugar in the form of lactose powder. This is widely available in drug stores, since it is also used as a mild laxative for infants.

In [1]:
using JuMP
using GLPKMathProgInterface

In [2]:
m = Model(solver=GLPKSolverLP())

# our ingredients (without the implicit water)
@variable(m, whole_milk ≥ 0)   # in 100 ml
@variable(m, lowfat_milk ≥ 0)  # in 100 ml
@variable(m, lactose ≥ 0)      # in g

# slack in our macro targets
@variable(m, slack_fat ≥ 0)
@variable(m, slack_carb ≥ 0)
@variable(m, slack_protein ≥ 0)

# fit liquid volume (100 ml)
@constraint(m, whole_milk + lowfat_milk ≤ 1)

# match fat content (g)
@constraint(m, 3.9 * whole_milk + 1.6 * lowfat_milk == 3.5 + slack_fat)

# match carb content (g)
@constraint(m, 4.9 * whole_milk + 4.9 * lowfat_milk + lactose == 7.3  + slack_carb )

# match protein content (g)
@constraint(m, 3.4 * whole_milk + 3.5 * lowfat_milk == 1.25  + slack_protein)

# minimize the mismatch of target
@objective(m, :Min, slack_carb + slack_fat + slack_protein);

status = solve(m)

Out[2]:
:Optimal
In [3]:
@show getvalue(whole_milk) getvalue(lowfat_milk) getvalue(lactose)
@show getvalue(slack_fat) getvalue(slack_carb) getvalue(slack_protein);

getvalue(whole_milk) = 0.8974358974358976
getvalue(lowfat_milk) = 0.0
getvalue(lactose) = 2.902564102564101
getvalue(slack_fat) = 0.0
getvalue(slack_carb) = 0.0
getvalue(slack_protein) = 1.8012820512820515


It seems that we cannot get a perfect match, since the slack variables are not all zero.

In particular, we will overshoot the protein content, or else not meet the fat content. So, with these ingredients, we will not be able to reproduce the baby formula as planned.
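
The infeasibility also follows from a quick back-of-the-envelope computation: meeting the fat target pins down the amount of milk, which then overshoots the protein target. This matches the solver's whole_milk ≈ 0.897 and slack_protein ≈ 1.80 (= 3.05 - 1.25) above.

```julia
# Meeting the fat target with whole milk alone (reduced-fat milk has even
# more protein per gram of fat, so it cannot help here):
milk_needed = 3.5 / 3.9        # 100-ml units of whole milk for 3.5 g of fat
protein = 3.4 * milk_needed    # protein that this much milk brings along, ≈ 3.05 g
```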

In order to reduce the protein content, we need another non-dairy fat source. Let us use rapeseed oil which is often recommended for preparation of (non-dairy) infant meals because of its neutral taste and rich content of essential fatty acids (omega-3).

In [4]:
m = Model(solver=GLPKSolverLP())

# our ingredients
@variable(m, whole_milk ≥ 0)   # in 100 ml
@variable(m, lowfat_milk ≥ 0)  # in 100 ml
@variable(m, lactose ≥ 0)      # in g
@variable(m, oil ≥ 0)  # in g

# no slack variables this time, we are confident in feasibility!

# fit liquid volume (100ml)
@constraint(m, whole_milk + lowfat_milk ≤ 1)

# match fat content (g)
@constraint(m, 3.9 * whole_milk + 1.6 * lowfat_milk + oil == 3.5)

# match carb content (g)
@constraint(m, 4.9 * whole_milk + 4.9 * lowfat_milk + lactose == 7.3)

# match protein content (g)
@constraint(m, 3.4 * whole_milk + 3.5 * lowfat_milk == 1.25)

# minimize supplements
@objective(m, :Min, lactose + oil);

status = solve(m)

Out[4]:
:Optimal
In [5]:
@show getvalue(whole_milk) getvalue(lowfat_milk) getvalue(lactose) getvalue(oil);

getvalue(whole_milk) = 0.36764705882352944
getvalue(lowfat_milk) = 0.0
getvalue(lactose) = 5.498529411764705
getvalue(oil) = 2.0661764705882355


The interpretation of this solution is as follows: to reproduce 100 ml of baby formula, we should mix 37 ml of whole milk with 5.5 g of lactose, about 2.1 g of oil and roughly 60 ml of water. Reduced-fat milk is not useful, but both the lactose and the oil supplement are necessary.
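
As a sanity check, we can recompute the macronutrient content of this blend directly from the solution values:

```julia
# Solution values from the LP above (per 100 ml of drink)
whole_milk = 0.36764705882352944  # in 100 ml
lactose    = 5.498529411764705    # in g
oil        = 2.0661764705882355   # in g

fat     = 3.9 * whole_milk + oil      # g, target 3.5
carbs   = 4.9 * whole_milk + lactose  # g, target 7.3
protein = 3.4 * whole_milk            # g, target 1.25
```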

Of course, this only takes the macronutrients into account, while the sophisticated baby formulas also strive to meet the micronutrient needs of the baby.

### Diluting the Drink

Once the infant has reached a certain age, it no longer needs food every 3-4 hours. In particular, infants do not really need to drink milk during the night and might only do so out of habit. In fact, it seems that from about one year of age, they do not even need any liquid during the night and will be fine anyway.

At the same time, frequent food intake (in particular drinking) will increase the risk of tooth decay. So it makes sense to avoid night-time feeding.

In order to ensure some peaceful sleep nonetheless, I would like to scale down the energy content of our night-time drink over time. We will end up with just water, but get there in small steps. We will use our ingredients (with supplements) from above but change the targets: rather than fixing the macronutrient levels, we only ask for a specific energy content and put a limit on the protein level, since the infant's kidneys might not be ready to handle larger amounts yet.

In [6]:
function dilution(rel_energy=1.0)
m = Model(solver=GLPKSolverLP())

# our ingredients
@variable(m, whole_milk ≥ 0)   # in 100 ml
@variable(m, lowfat_milk ≥ 0)  # in 100 ml
@variable(m, lactose ≥ 0)      # in g
@variable(m, oil ≥ 0)  # in g

# fit liquid volume (100ml)
@constraint(m, whole_milk + lowfat_milk ≤ 1)

# match energy content (kcal)
@constraint(m, 68 * whole_milk + 48 * lowfat_milk + 4 * lactose + 9 * oil == rel_energy * 66)

# limit protein content (g)
@constraint(m, 3.4 * whole_milk + 3.5 * lowfat_milk <= 1.25)

# minimize supplements
@objective(m, :Min, lactose + oil);

status = solve(m)
sol = getvalue(whole_milk), getvalue(lowfat_milk), getvalue(lactose), getvalue(oil)

return status, sol
end

Out[6]:
dilution (generic function with 2 methods)
In [7]:
status, sol = dilution(1)

Out[7]:
(:Optimal, (0.36764705882352944, 0.0, 0.0, 4.555555555555555))

By using a parameter value of 1, we get a drink with the same energy content as our baby formula, but with a more relaxed macronutrient ratio.

Again, reduced-fat milk is not used and we find that only an oil supplement is needed but no sugar supplement. This makes sense, since the energy content of fat is higher than that of sugar and our objective here was to minimize the total mass of supplements.
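
We can verify this from the energy balance alone. With the milk amount fixed at the protein cap, the remaining energy gap can be closed by either supplement, and fat needs fewer grams:

```julia
whole_milk = 1.25 / 3.4      # milk amount at the protein cap (≈ 0.368)
gap = 66 - 68 * whole_milk   # kcal still missing (whole milk: 68 kcal / 100 ml)

oil_needed     = gap / 9     # g of oil (9 kcal/g), ≈ 4.56 as in the output above
lactose_needed = gap / 4     # g of lactose (4 kcal/g), more than twice as much
```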

Let us now compute a sequence of solutions with decreasing energy content, to be used over the course of two weeks.

In [8]:
# iterate over parameter range and collect solutions
r = range(1.0, stop=1e-6, length=14)
sols = []
for rel_nrg in r
status, sol = dilution(rel_nrg)
sol = collect(sol)' # tuple to row vector
push!(sols, sol)
end

# collect all solutions values into one 2d-array
sols = vcat(sols...)

Out[8]:
14×4 Array{Float64,2}:
0.367647    0.0  0.0  4.55556
0.367647    0.0  0.0  3.99145
0.367647    0.0  0.0  3.42735
0.367647    0.0  0.0  2.86325
0.367647    0.0  0.0  2.29915
0.367647    0.0  0.0  1.73505
0.367647    0.0  0.0  1.17094
0.367647    0.0  0.0  0.606842
0.367647    0.0  0.0  0.0427396
0.298643    0.0  0.0  0.0
0.223983    0.0  0.0  0.0
0.149322    0.0  0.0  0.0
0.0746615   0.0  0.0  0.0
9.70588e-7  0.0  0.0  0.0      

We can see that we should never use more than 37 ml of whole milk per 100 ml of our drink. In the beginning, this amount of milk stays the same and the oil supplementation will go down to account for the reduced energy content. Only when there is no more oil to be added, does the amount of milk go down. At this point, we will only mix whole milk and water.
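
The breakpoint in the table can be computed directly: the oil runs out once the milk alone (at the protein cap) supplies all of the requested energy.

```julia
whole_milk_max   = 1.25 / 3.4                 # milk amount allowed by the protein cap
rel_energy_break = 68 * whole_milk_max / 66   # ≈ 0.379
```

This value falls between the 9th and the 10th parameter value of our 14-step range, which is exactly where the table switches from constant milk to constant (zero) oil.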

Let us also analyze the macronutrient composition of our dilutions:

In [9]:
whole_milk, oil = sols[:, 1], sols[:, 4]

fat = 3.9 * whole_milk + oil
carbs = 4.9 * whole_milk
protein = 3.4 * whole_milk

energy = 9 * fat + 4 * carbs + 4 * protein

macros = hcat(fat, carbs, protein, energy)

Out[9]:
14×4 Array{Float64,2}:
5.98938     1.80147     1.25      66.1103
5.42528     1.80147     1.25      61.0334
4.86118     1.80147     1.25      55.9565
4.29707     1.80147     1.25      50.8795
3.73297     1.80147     1.25      45.8026
3.16887     1.80147     1.25      40.7257
2.60477     1.80147     1.25      35.6488
2.04067     1.80147     1.25      30.5719
1.47656     1.80147     1.25      25.495
1.16471     1.46335     1.01539   20.3973
0.873532    1.09751     0.761541  15.298
0.582356    0.731678    0.507695  10.1987
0.29118     0.365841    0.253849   5.09938
3.78529e-6  4.75588e-6  3.3e-6     6.62912e-5

Not surprisingly, our drink is relatively high in fat, more so than either whole milk or baby formula.

Similarly, we can compute the relative energy from macros in the above sequence:

In [10]:
macro_nrg = hcat(9 * fat, 4 * carbs, 4 * protein) ./ energy

Out[10]:
14×3 Array{Float64,2}:
0.815371  0.108998  0.0756312
0.800013  0.118065  0.0819224
0.781868  0.128777  0.0893552
0.760102  0.141626  0.0982713
0.733511  0.157325  0.109164
0.70029   0.176937  0.122773
0.657607  0.202135  0.140257
0.600748  0.235703  0.163549
0.521243  0.28264   0.196117
0.513909  0.286969  0.199122
0.513909  0.286969  0.199122
0.513909  0.286969  0.199122
0.513909  0.286969  0.199122
0.513909  0.286969  0.199122 

Energy from fat goes down, while the contributions from carbs and protein go up, keeping their ratio constant:

In [11]:
macro_nrg[:, 2] ./ macro_nrg[:, 3]

Out[11]:
14-element Array{Float64,1}:
1.4411764705882357
1.4411764705882355
1.4411764705882355
1.4411764705882357
1.4411764705882353
1.4411764705882355
1.4411764705882355
1.4411764705882355
1.4411764705882357
1.4411764705882353
1.4411764705882355
1.4411764705882353
1.4411764705882353
1.4411764705882353
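
The constant value is no coincidence: lactose is zero in every dilution, so both carbs and protein come from whole milk alone, and their ratio is fixed by the milk's composition.

```julia
# carbs and protein per 100 ml of whole milk
carb_protein_ratio = 4.9 / 3.4   # ≈ 1.4412, the value printed above
```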

That's it for this analysis. I'm sorry for the lack of visualization!

# Dissonance of Triads and the Bohlen-Pierce Scale

We use a physiological model of consonant and dissonant musical sound for human listeners and compute a dissonance score to rate the possible triads (combinations of three notes) in a scale. This is first done on the conventional scale of 12 equidistant tones per octave, and then on the Bohlen-Pierce scale.

In [1]:
using Plots
gr()

Out[1]:
Plots.GRBackend()

### Plomp-Levelt dissonance curve

As a basis of our investigation, we use the Plomp-Levelt dissonance curve as explained in this article by William Sethares.

The curve models the fact that similar frequencies appear dissonant because of "friction" between the adjacent sensors in our ears.

In [2]:
"Dissonance of sound with two sinusoidal sounds with given frequencies and amplitudes."
function diss(freq1, ampl1, freq2, ampl2)
if freq2 < freq1 # swap args
return diss(freq2, ampl2, freq1, ampl1)
end
s = 0.24 * (freq2 - freq1) / (0.0207 * freq1 + 18.96)
5.0 * ampl1 * ampl2 * (exp(-3.51 * s) - exp(-5.75 * s))
end

Out[2]:
diss

Let's draw the curve for a pair of sine waves, the first fixed at given base frequency and the other going through a range of relatively higher frequencies.

In [3]:
basefreq = 440.0
ratrange = range(1.0, stop=3.0, length=200)
otherfreq = basefreq * ratrange
disss = diss.(basefreq, 1.0, otherfreq, 1.0);

In [4]:
plot(otherfreq, disss, xlabel="Frequency", ylabel="Dissonance", legend=false)

Out[4]:

### Dissonance with harmonics

Rather than analyzing dissonance only between sinusoidal sounds, we also want to consider sounds with rich harmonics. These contain multiple frequencies (integer multiples of the fundamental) with declining amplitudes. When computing the dissonance between two pitches with harmonics, we have to add up the dissonance values for all pairs of frequencies.

In [5]:
# Let's first give formulas for the amplitudes of common waveshapes.

enumrange(number::Int) = collect(range(1.0, stop=number, length=number))

abstract type Wave end

struct Sine <: Wave end
harm(::Sine, number::Int) = vcat([1.0], zeros(number - 1))

struct Sawtooth <: Wave end
harm(::Sawtooth, number::Int) = 1.0 ./ enumrange(number)

struct Triangle <: Wave end
harm(::Triangle, number::Int) = [(((i+2)%4)-2)*(i%2) for i=1:number] ./ enumrange(number).^2

struct Square <: Wave end
harm(::Square, number::Int) = [i%2 for i in 1:number] ./ enumrange(number)

Out[5]:
harm (generic function with 4 methods)
In [6]:
@show harm(Sine(), 9)
@show harm(Sawtooth(), 9)
@show harm(Square(), 9)
@show harm(Triangle(), 9);

harm(Sine(), 9) = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
harm(Sawtooth(), 9) = [1.0, 0.5, 0.333333, 0.25, 0.2, 0.166667, 0.142857, 0.125, 0.111111]
harm(Square(), 9) = [1.0, 0.0, 0.333333, 0.0, 0.2, 0.0, 0.142857, 0.0, 0.111111]
harm(Triangle(), 9) = [1.0, 0.0, -0.111111, 0.0, 0.04, 0.0, -0.0204082, 0.0, 0.0123457]


Let's also quickly check how our approximated waveforms look:

In [7]:
xx = range(0, stop=4π, length=100)

function yy(wave, n)
coefs = harm(wave, n)
sum(coefs[f] * sin.(f*xx) for f=1:n)
end

plot(xx,[yy(w, 20) for w in [Sawtooth(), Square(), Triangle()]])

Out[7]:
In [8]:
# We add a new type to couple frequency and amplitute of a harmonic overtone

struct Tone
freq::Float64
ampl::Float64
end

diss(t1::Tone, t2::Tone) = diss(t1.freq, t1.ampl, t2.freq, t2.ampl)

Out[8]:
diss (generic function with 2 methods)
In [9]:
# More functions to generate all overtones

function harmonics(wave::Wave, basefreq::Float64, number::Int)
# frequencies with integer multiples
freqs = basefreq .* enumrange(number)
ampls = abs.(harm(wave, number))
# keep tones with nonzero amplitude
[Tone(tup...) for tup in zip(freqs, ampls) if tup[2] > 0.0]
end

Out[9]:
harmonics (generic function with 1 method)
In [10]:
@show harmonics(Sine(), 440.0, 4)
@show harmonics(Triangle(), 440.0, 4);

harmonics(Sine(), 440.0, 4) = Tone[Tone(440.0, 1.0)]
harmonics(Triangle(), 440.0, 4) = Tone[Tone(440.0, 1.0), Tone(1320.0, 0.111111)]

In [11]:
function diss(tones::AbstractArray{Tone})
res = 0.0
# iterate over all pairs
n = length(tones)
for i = 1:n
for j=i+1:n
res += diss(tones[i], tones[j])
end
end
res
end

Out[11]:
diss (generic function with 3 methods)
In [12]:
function together(tones1::AbstractArray{Tone}, tones2::AbstractArray{Tone})
# TODO: should do proper merge and match those tones with same frequency?
vcat(tones1, tones2)
end

function diss(tones1::AbstractArray{Tone}, tones2::AbstractArray{Tone})
diss(together(tones1, tones2))
end

Out[12]:
diss (generic function with 4 methods)

As a first example, we look at the square waveform approximated with 10 harmonics and draw the dissonance curve for two pitches played simultaneously.

Note the local maxima near frequency ratios 1 and 2 and the minima at integer (or "simple") ratios.

In [13]:
numharm = 10
basetone = harmonics(Square(), basefreq, numharm)
othertones = [harmonics(Square(), of, numharm) for of in otherfreq]
disss = [diss(basetone, ot) for ot in othertones]

plot(ratrange, disss, legend=false, xlabel="Freq ratio", ylabel="Dissonance", title="Square $basefreq")

Out[13]:

Next, we repeat the experiment, now based on an approximation of the sawtooth waveform. Because of the richer harmonics, there is more interaction and more dissonance (in absolute terms). As a consequence, the dissonance curve shows more local minima, again corresponding to "simple" frequency ratios, that is, ratios of small integers.

In [14]:
numharm = 10
basetone = harmonics(Sawtooth(), basefreq, numharm)
othertones = [harmonics(Sawtooth(), of, numharm) for of in otherfreq]
disss = [diss(basetone, ot) for ot in othertones]

plot(ratrange, disss, legend=false, xlabel="Freq ratio", ylabel="Dissonance", title="Sawtooth $basefreq")

Out[14]:

We will now compute the dissonances for triads of tones in an analogous fashion. This time, however, we will only evaluate a discrete set of pitches, based on a 12-tone scale dividing the octave into equal steps.

In [15]:
# dissonance for triads
function diss(tones1::AbstractArray{Tone}, tones2::AbstractArray{Tone}, tones3::AbstractArray{Tone})
diss(together(together(tones1, tones2), tones3))
end

Out[15]:
diss (generic function with 5 methods)
In [16]:
# range of one octave
halfstep = 2.0^(1.0/12.0)
@show halfstep
ratrange = [halfstep^i for i=0:12]

basefreq = 440.0
otherfreq = basefreq * ratrange;

halfstep = 1.0594630943592953


We start using the square waveform. In the heatmap below, we can see that dissonance is high whenever a single halfstep is present in the triad. This is true for the horizontal and vertical lines that are off by one from the origin, as well as the two diagonals off by one from the center. To a lesser extent, this pattern repeats for 11 halfsteps (one octave up, one halfstep down).

In [17]:
# start with square wave again
numharm = 10
basetone = harmonics(Square(), basefreq, numharm)
othertones = [harmonics(Square(), of, numharm) for of in otherfreq]

# compute dissonances of all triads
disss = [diss(basetone, ot1, ot2) for ot1 in othertones, ot2 in othertones]

heatmap(0:12, 0:12, disss, aspect_ratio=1.0, title="Square wave on 12-tone octave")

Out[17]:

If we use the sawtooth wave instead, we see a richer structure in the dissonance heatmap. Here, we should also be able to recognize familiar concepts such as the major or minor triads.

In [18]:
# now use richer sawtooth wave
numharm = 10
basetone = harmonics(Sawtooth(), basefreq, numharm)
othertones = [harmonics(Sawtooth(), of, numharm) for of in otherfreq]

# compute dissonances of all triads
disss = [diss(basetone, ot1, ot2) for ot1 in othertones, ot2 in othertones]

heatmap(0:12, 0:12, disss, aspect_ratio=1.0, title="Sawtooth wave on 12-tone octave")

Out[18]:

### Bohlen-Pierce Scale

One way to view the conventional Western scale is as a division of an octave (that is, a 2:1 frequency ratio) into 12 (more or less) equal steps. Alternatively, one can start from the ratios 4:5:6 (which appear naturally as harmonics) and collect the pitches that are reached from these recursively.

In the Bohlen-Pierce scale, we range over a tritave (a 3:1 ratio) in 13 steps. This can also be derived using ratios 3:5:7. The scale is well-suited for instruments that only feature odd harmonics, such as the clarinet or the pan flute.
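
We can check numerically how well the equal steps of both scales approximate their defining just ratios. This is a standalone sketch; the step choices are the standard ones and the tolerances are arbitrary:

```julia
# 12-tone scale: steps 4 and 7 approximate the just ratios 5:4 and 3:2
halfstep = 2.0^(1/12)
@show abs(halfstep^4 - 5/4)   # major third, error ≈ 0.010
@show abs(halfstep^7 - 3/2)   # perfect fifth, error ≈ 0.002

# Bohlen-Pierce scale: steps 4, 6 and 10 approximate 7:5, 5:3 and 7:3
bpstep = 3.0^(1/13)
@show abs(bpstep^4  - 7/5)    # error ≈ 0.002
@show abs(bpstep^6  - 5/3)    # error ≈ 0.006
@show abs(bpstep^10 - 7/3)    # error ≈ 0.005
```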

In [19]:
# range of one tritave
bpstep = 3.0^(1.0/13.0)
@show bpstep
ratrange = [bpstep^i for i=0:13]

basefreq = 440.0
otherfreq = basefreq * ratrange;

bpstep = 1.0881822434633168

In [20]:
# start with square wave again
numharm = 10
basetone = harmonics(Square(), basefreq, numharm)
othertones = [harmonics(Square(), of, numharm) for of in otherfreq]

# compute dissonances of all triads
disss = [diss(basetone, ot1, ot2) for ot1 in othertones, ot2 in othertones]

heatmap(0:13, 0:13, disss, aspect_ratio=1.0, title="Square wave on BP tritave")

Out[20]:

The heatmap above looks fairly boring, with not much dissonance apart from the obvious lines. Maybe this is related to the wider spacing of the steps in this scale.

We will also look at the sawtooth wave, even though we should expect high dissonance throughout.

In [21]:
# now use richer sawtooth wave
numharm = 10
basetone = harmonics(Sawtooth(), basefreq, numharm)
othertones = [harmonics(Sawtooth(), of, numharm) for of in otherfreq]

# compute dissonances of all triads
disss = [diss(basetone, ot1, ot2) for ot1 in othertones, ot2 in othertones]

heatmap(0:13, 0:13, disss, aspect_ratio=1.0, title="Sawtooth wave on BP tritave")

Out[21]:

### Larger Pitch Ranges

We will generate some more heatmaps covering two octaves or tritaves, each time based on sawtooth waves.

First the 12-tone octaves:

In [22]:
# range of two octaves
ratrange = [halfstep^i for i=0:24]
basefreq = 440.0
otherfreq = basefreq * ratrange

# sawtooth wave
numharm = 10
basetone = harmonics(Sawtooth(), basefreq, numharm)
othertones = [harmonics(Sawtooth(), of, numharm) for of in otherfreq]

# compute dissonances of all triads
disss = [diss(basetone, ot1, ot2) for ot1 in othertones, ot2 in othertones]

heatmap(0:24, 0:24, disss, aspect_ratio=1.0, title="Sawtooth wave on 12-tone octaves")

Out[22]:

Then the BP tritaves:

In [23]:
# range of two tritaves
ratrange = [bpstep^i for i=0:26]
basefreq = 440.0
otherfreq = basefreq * ratrange

# sawtooth wave
numharm = 10
basetone = harmonics(Sawtooth(), basefreq, numharm)
othertones = [harmonics(Sawtooth(), of, numharm) for of in otherfreq]

# compute dissonances of all triads
disss = [diss(basetone, ot1, ot2) for ot1 in othertones, ot2 in othertones]

heatmap(0:26, 0:26, disss, aspect_ratio=1.0, title="Sawtooth wave on BP tritaves")

Out[23]:

# Marginal Net Income in Germany

We investigate the tax burden and insurance cost for different scenarios of gross income using a simplified model of the situation in Germany. The goal is to compute the marginal net income for additional gross income. From that, we can also derive the net hourly income for part-time employment.

This analysis is done using Julia in a Jupyter notebook.

In [1]:
# assume a typical monthly income
gross_monthly = 37873 / 12

Out[1]:
3156.0833333333335

### Social costs

For simplicity, we ignore the possibility of private health insurance and assume that our typical employee participates in the public social system. This includes health insurance, unemployment insurance and payments to the pension system.

These payments are split evenly between the employer and employee. This means that the labor cost of the employer is higher than the gross salary paid.

Source: nettolohn.de

In [2]:
# only includes the employee's share
pension = 0.186
unemployment = 0.03
health = 0.146
longterm_care = 0.028
base_rate = 0.5 * (pension + unemployment + health + longterm_care)
health_extra = 0.011
employer_rate = base_rate
employee_rate = base_rate + health_extra
@show employer_rate, employee_rate;

(employer_rate, employee_rate) = (0.195, 0.20600000000000002)

In [3]:
# effective labor cost for employer
real_monthly = (1.0 + employer_rate) * gross_monthly

Out[3]:
3771.5195833333337

However, these relative rates only apply up to a specific bound on the gross income. So, let's define some functions to compute the correct costs.

In the case of self-employment, there is also a virtual minimum income that is used as a reference, which we will ignore for simplicity.

Source: Beitragsbemessungsgrenze

In [4]:
function social_cost(gross_monthly)
pension       = 0.186 * min(gross_monthly, 6500.0)
unemployment  = 0.030 * min(gross_monthly, 6500.0)
health        = 0.146 * min(gross_monthly, 4425.0)
longterm_care = 0.028 * min(gross_monthly, 4425.0)
pension + unemployment + health + longterm_care
end

employer_cost(gross_monthly) = 0.5 * social_cost(gross_monthly)
employee_cost(gross_monthly) = 0.5 * social_cost(gross_monthly) + 0.011 * gross_monthly
total_cost(gross_monthly) = employer_cost(gross_monthly) + employee_cost(gross_monthly)
employer_total(gross_monthly) = employer_cost(gross_monthly) + gross_monthly;


Let's visualize the total social cost relative to the real monthly labor cost.

In [5]:
using Plots
gr()

Out[5]:
Plots.GRBackend()
In [6]:
gross_range = 400.0:20.0:10000.0
real_income = employer_total.(gross_range)
relative_cost = total_cost.(gross_range) ./ real_income
plot(real_income, relative_cost, xlim=(0,11000), ylim=(0.0, 0.5), legend=false,
xlabel="effective monthly income", ylabel="rel social cost")

Out[6]:

So, higher gross (or effective) income leads to a smaller relative social cost. We can repeat that plot for the employee's point of view:

In [7]:
plot(gross_range, employee_cost.(gross_range)./gross_range, xlim=(0,10000), ylim=(0.0, 0.3), legend=false,
xlabel="gross monthly income", ylabel="employee's share of social cost")

Out[7]:

The income tax only applies to the part of the gross income from which the social costs have already been subtracted. Further, some portion of that remainder is also free from taxes.

In [8]:
taxable_income(gross_monthly) = gross_monthly - employee_cost(gross_monthly)

Out[8]:
taxable_income (generic function with 1 method)

### Income tax

The tax rate is a piecewise-defined function. We assume an unmarried employee with no kids who is not liable for church tax.

The source below actually contains flow charts with detailed conditions, thresholds and rounding of intermediate results. I can't be bothered to understand all of that, so I will just extract the essential information.

Source: BMF Steuerrechner

In [9]:
function income_tax(yearly)
if yearly <= 9000
return 0
elseif yearly < 13997
y = (yearly - 9000) / 10000
rw = y * 997.8 + 1400
return ceil(rw * y)
elseif yearly < 54950
y = (yearly - 13996) / 10000
rw = y * 220.13 + 2397
return ceil(rw * y + 948.49)
elseif yearly < 260533
return ceil(yearly * 0.42 - 8621.75)
else
return ceil(yearly * 0.45 - 16437.7)
end
end

Out[9]:
income_tax (generic function with 1 method)
In [10]:
yearly_range = 5000:500:100000
plot(yearly_range, income_tax.(yearly_range), legend=false, xlim=(0,100000), ylim=(0,35000),
xlabel="yearly taxable income", ylabel="income tax")

Out[10]:
In [11]:
plot(yearly_range, income_tax.(yearly_range) ./ yearly_range, legend=false, xlim=(0,100000), ylim=(0,0.4),
xlabel="yearly taxable income", ylabel="income tax rate")

Out[11]:

### Net income

Now, let's combine the social costs and taxes to compute the net income.

In [12]:
function net(gross_monthly)
taxable = taxable_income(gross_monthly)
taxes = income_tax(ceil(12 * taxable)) / 12
taxable - taxes
end

net_income = net.(gross_range)

plot(gross_range, net_income, legend=false, xlim=(0,10000), ylim=(0,6000),
xlabel="monthly gross income", ylabel="monthly net income")

Out[12]:

This looks surprisingly linear. Let's also plot the relative net income compared to the effective cost.

In [13]:
rel_net = net_income ./ real_income
plot(real_income, rel_net, legend=false, xlim=(0,12000), ylim=(0,0.7),
xlabel="effective monthly income", ylabel="rel net income")

Out[13]:

Interestingly, this curve is not monotonically decreasing. In fact, there seems to be a minimum at a gross income of 4425 (a real income of about 5300), which is the upper reference value for the health insurance.
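
We can make the kink at the cap explicit by comparing the marginal social cost for the employee just below and just above it. This is a standalone restatement with the rates and caps from the cells above (long-term care at 0.028, as in the first rates cell):

```julia
# employee's share of the social cost, restated in one line
employee_cost(g) = 0.5 * (0.186 * min(g, 6500.0) + 0.030 * min(g, 6500.0) +
                          0.146 * min(g, 4425.0) + 0.028 * min(g, 4425.0)) + 0.011 * g

# cost of 100 EUR of extra gross income, below vs above the health cap
below = employee_cost(4425.0) - employee_cost(4325.0)   # ≈ 20.6
above = employee_cost(4525.0) - employee_cost(4425.0)   # ≈ 11.9
```

Past the cap, an additional euro of gross income only incurs pension and unemployment contributions (plus the extra health rate), so the marginal burden roughly halves.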

### Marginal income

The analysis above can be slightly misleading. If we currently earn a certain income and have an opportunity to increase it, the raise might also decrease our relative net income. Note, however, that the higher rates for social cost and income tax are applied equally to the previous and the additional income.

If we want to show how much net income we retain for each additional EUR earned, we have to take some more care. Here, we approximate the slope of the net income as a function of real income using a symmetric difference quotient. There are some jumps in that curve, since our net income is not smooth.

In [14]:
finite_diffs = (net_income[3:end] - net_income[1:end-2]) ./ (real_income[3:end] - real_income[1:end-2])
plot(real_income[3:end], [finite_diffs rel_net[3:end]], xlim=(0,12000), ylim=(0,0.65),
xlabel="effective monthly income", label=["marginal income" "relative net income"],
legend=:bottomright)

Out[14]:

### Part-time work and hourly income

We have seen that a lower gross income often corresponds to a higher relative net income. If we have the option to reduce our working hours, we should be able to increase our income per hour.

Let's assume a 40-hour work week and 4 weeks per month.

In [15]:
monthly_hours = 40.0 * 4.0

Out[15]:
160.0
In [16]:
# median income
median_net = net(gross_monthly)

Out[16]:
2058.3468333333333
In [17]:
median_net / monthly_hours

Out[17]:
12.864667708333333
In [18]:
using ColorBrewer

In [19]:
# hourly income for different monthly gross incomes
hours = 10:40
steps = 6
gross = range(1000, stop=8000, length=steps)'
colors = ColorBrewer.palette("YlGnBu", steps + 3)[4:end]
parttime_net = net.((hours / 40)*gross)
hourly_net = parttime_net ./ (hours * 4)
plot(hours, hourly_net, labels=gross, ylim=(0,40),
color_palette=colors,
xlabel="hours per week", ylabel="hourly net income",
title="Hourly net income for part-time work")

Out[19]:

As we can see above, we can increase our hourly income by working fewer hours every week. This effect is clearer if we look at the hourly net income of part-time work relative to the hourly net income of full-time work:

In [20]:
rel_hourly = hourly_net ./ hourly_net[end, :]';

In [21]:
plot(hours, rel_hourly, labels=gross, color_palette=colors,
xlabel="hours per week", ylabel="relative hourly net income",
title="Relative hourly net income for part-time work")

Out[21]:

For low values of gross income, reducing the weekly work hours does not affect the hourly income much. Medium values show the most promise, with the largest increase in hourly income. With a high enough income, reducing the work load yields less and less increase in hourly income.