In the end I needed two scripts and one Go program to implement this.
The first script looks like this:
#!/bin/bash
# start the device backing program in the background
cd ubuntu_17_04/dev/
./devices &
D_PID=$!
cd -
# new mount and PID namespaces, fork, fresh /proc, map the current user to root
unshare -m -p -f --mount-proc -r ./worker1
# stop the device backing program again
kill $D_PID

The main purpose is to start the device backing program before worker1 is started in the extra namespaces and to kill it after worker1 has finished. Oh, and the unshare command is pretty long to type each time I want to start Owncloud.
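For reference, the unshare line can also be expressed in Go. This is just a sketch of what the flags translate to, not code I actually use (the --mount-proc part has no single-field equivalent here and is left out):

package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("./worker1")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// -m and -p: new mount and PID namespaces; -r: a new user
		// namespace that maps the current user/group to root
		Cloneflags: syscall.CLONE_NEWNS | syscall.CLONE_NEWPID | syscall.CLONE_NEWUSER,
		UidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getuid(), Size: 1},
		},
		GidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getgid(), Size: 1},
		},
	}
	if err := cmd.Run(); err != nil {
		log.Println(err)
	}
}

The -f of unshare is implicit here: exec.Command forks anyway, so worker1 becomes PID 1 of the new PID namespace.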
The worker1.go looks like this:
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// bind the Owncloud data directory into the new root filesystem
	err := syscall.Mount("/home/user/owncloud/data", "/home/user/owncloud/ubuntu_17_04/home/user/owncloud", "", syscall.MS_BIND, "")
	if err != nil {
		log.Println(err)
		return
	}
	// bind the proc filesystem (mounted fresh by unshare --mount-proc) into the new root
	err = syscall.Mount("/proc", "ubuntu_17_04/proc", "", syscall.MS_BIND, "")
	if err != nil {
		log.Println(err)
		return
	}
	cmd := exec.Command("/worker2.sh")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// change the root to the extracted Ubuntu filesystem and open a
		// new user namespace that maps www-data (33) to the current user
		Chroot:     "/home/user/owncloud/ubuntu_17_04/",
		Cloneflags: syscall.CLONE_NEWUSER,
		UidMappings: []syscall.SysProcIDMap{
			{
				ContainerID: 33,
				HostID:      os.Getuid(),
				Size:        1,
			},
		},
		GidMappings: []syscall.SysProcIDMap{
			{
				ContainerID: 33,
				HostID:      os.Getgid(),
				Size:        1,
			},
		},
	}
	err = cmd.Run()
	if err != nil {
		log.Println(err)
		return
	}
}

The mount binds at the start are pretty boring. The interesting magic happens at the exec.Command.
During development I didn't start worker2.sh but an interactive bash. It turned out that I needed to provide cmd.Stdin, cmd.Stdout AND cmd.Stderr to get a normally working bash.
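Stripped down to just that point, the debugging variant looked something like this (a minimal sketch without the chroot and namespace setup, so it runs anywhere):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// run an interactive bash instead of worker2.sh; all three standard
	// streams have to be passed through, otherwise the shell is unusable
	cmd := exec.Command("/bin/bash")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Println(err)
	}
}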
The cmd.SysProcAttr specifies a new root for the command and also opens a new user namespace with another UID/GID mapping so that php-fpm is satisfied. I chose the ID 33 because there is a user/group www-data with this ID inside the Ubuntu filesystem.
The reason I wrote the program at all is that I read somewhere (sorry, I couldn't find the source anymore) that in nested user namespaces only the process with PID 1 can remap the IDs. When you run chroot from a shell script you are no longer PID 1, and that is why it failed.
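If you want to check what a process actually sees, a little helper like this one (purely illustrative, not part of the setup) can be dropped into the container. It prints the PID, the UID, and the active UID mapping of its user namespace, so you can verify that the remapping took effect:

package main

import (
	"fmt"
	"os"
)

func main() {
	// /proc/self/uid_map lists: ID inside the namespace, ID in the
	// outer namespace, and the length of the mapped range
	uidMap, err := os.ReadFile("/proc/self/uid_map")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("pid: %d uid: %d\n", os.Getpid(), os.Getuid())
	fmt.Printf("uid_map: %s", uidMap)
}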
The worker2.sh is pretty simple:
#!/bin/bash
cd /home/joerg/data/owncloud/
./start_cloud.sh
And that's it.
This post is part of a series:
- Owncloud in a container
- Container and namespaces
- Getting the runtime files
- Device files
- Putting all together