How to mount a host directory in an LXC container with write access


Hello,

I’m wanting to mount a host directory in an LXC container with write access.

I’m using Ubuntu Jammy.

I’ve done some searching and found this post

One of the responses describes how to do what I’m trying to achieve.

But in closing the poster says

NOTE : Please note that before switching to this profile, make sure that all directories or files whose owner/group is debian are deleted (and probably recreated after the switch). This is because after the uid and gid mapping, their ownership will become invalid. I originally thought since I am just mapping 1000 to 1000 everything should be fine, but I think I missed something here and it would be great if someone can advise on how to resolve this without the hack.

So, there are issues with the posted “solution”.

I was hoping that someone could review the solution and advise how to improve it, or is there another, better way of providing write access to the container?

Thanks

VW

Is it LXD or plain old LXC?

I’m new to LXD/LXC, so not sure how to answer this.

The only LXD command that I have used is: lxd init

All other container commands have been: lxc

lxd init means it is LXD. Have you tried https://www.cyberciti.biz/faq/how-to-add-or-mount-directory-in-lxd-linux-container/ ?

That article didn’t appear in any of my previous searches.

Thanks for sharing, I will see how I go.

I use both NFS/EFS and standard dirs mounted inside LXD containers in read-write mode. So let me know if you need any more help.

Hello,

Unfortunately I do need more help.

The folders that I’m wanting to make accessible to the container are currently owned by root.

So, I have created another user, which is a non-root user, and my plan was to grant that user full access to the folders and then map the container to that user.

Sounds simple enough in theory, but now that I’m actually trying to issue the command with chmod, it’s not that simple.

I can’t see how to add a user to the folder, other than to use chown, which is not the answer here.

Should I be using a group?

Yes, you need to map those using a group.
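For reference, here is a minimal sketch of the plain group-based approach on the host, using hypothetical names (backupgrp, backupuser, /srv/backups); the LXD-specific steps below are what actually makes this work across the container boundary:

# Hypothetical: give non-root user `backupuser` write access to /srv/backups via a group
sudo groupadd backupgrp                  # shared group
sudo usermod -aG backupgrp backupuser    # add the non-root user to it
sudo chgrp -R backupgrp /srv/backups     # group ownership (owner stays root)
sudo chmod -R g+rwX /srv/backups         # group read/write, execute only on dirs
sudo chmod g+s /srv/backups              # new files inherit the group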

Step 1 - Create a new dir on the LXD host

Say you want to share /dir1

sudo mkdir /dir1

Step 2 - Create group for sharing dirs between LXD host and container

Here is how it looks with group ID 300 (you need to create the group and user yourself):

# Create a new read-only system group named `lxdclientro`
sudo addgroup --system --gid 300 lxdclientro

# Create a new read-write system user named `lxdclientrw` with home /dir1/
sudo adduser --system --home /dir1/ --uid 300 --gid 300 lxdclientrw
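You can confirm both entries exist before moving on (a quick check, assuming the names and IDs above):

getent group lxdclientro     # should show lxdclientro:x:300:
getent passwd lxdclientrw    # should show lxdclientrw with UID 300, GID 300 and home /dir1/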

Step 3 - Setup correct permission on /dir1

Here are commands for /dir1/

sudo chown lxdclientrw:lxdclientro /dir1/
sudo chmod 750 /dir1/
sudo chmod g+s  /dir1/

The real magic happens below, where you allow root on the LXD host to map our newly created UID/GID 300 (a range of 1) for /dir1/ into containers (run all commands as the root user: first do sudo -s and then type them):

echo 'root:300:1' >> /etc/subuid
echo 'root:300:1' >> /etc/subgid

Now your LXD host is ready to share /dir1/ in read-write mode with containers.
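Before moving on, it is worth a quick sanity check that the mapping entries and directory permissions look as expected (assuming the IDs and names above):

grep '^root:300:' /etc/subuid /etc/subgid    # both files should list root:300:1
ls -ld /dir1/                                # should show drwxr-s--- lxdclientrw lxdclientro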

Step 4 - Find UID and GID inside container to map

Let us say you have a container named ‘nginx’. Log into it:

lxc exec nginx sh

Say there is an app user inside that container called ‘myapp’ with UID 100. Use grep or the id command to find that user’s UID and GID. For example:

grep ^myapp /etc/passwd
grep ^myapp /etc/group
id -u myapp
id -g myapp 

Here I get 100 as both the UID and GID. Note down both values. If you don’t have a specific user/group inside the container, you need to create one; that is what the security mapping is matched against, otherwise access will be blocked.
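If the container does not yet have such a user, you can create one with a fixed UID/GID. This is a sketch assuming a Debian/Ubuntu-based container and the example name myapp with UID/GID 100 (adjust the numbers if 100 is already taken inside your container):

lxc exec nginx -- groupadd --gid 100 myapp
lxc exec nginx -- useradd --uid 100 --gid 100 -m -s /bin/sh myapp
lxc exec nginx -- id myapp    # confirm the UID/GID you will map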

Step 5 - Map UID and GID on the LXD host

Now exit from the container back to the LXD host. UID 300 on the LXD host is going to be mapped to UID 100 inside the nginx container. Similarly, GID 300 on the LXD host is going to be mapped to GID 100 inside the nginx container. The command is as follows for the container named nginx:

echo -en "uid 300 100\ngid 300 100" | lxc config set nginx raw.idmap -

Restart the container named nginx:

lxc restart nginx
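After the restart you can confirm the mapping was accepted (assuming the container name nginx from above):

lxc config get nginx raw.idmap    # should print the uid 300 100 and gid 300 100 lines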

Step 6 - Add /dir1/ in read-write mode to container named nginx

The syntax is:

lxc config device add nginx shareddisk disk source=/dir1 path=/dir1

This will mount /dir1/ from the LXD host into the container named nginx at /dir1/. You can log into the container:

lxc exec nginx sh

And see it:

ls -ld /dir1/
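From the host, you can also confirm the device is attached (using the device name shareddisk from the command above):

lxc config device show nginx    # should list shareddisk with source /dir1 and path /dir1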

Because of the security policy on both the LXD host and the container, only UID/GID 100 inside the container named nginx can update/edit files inside /dir1/. So inside the container you need to switch to that user, myapp:

su - myapp                 # or: runuser -u myapp -- sh
cd /dir1
mkdir foo                  # quick test that write access works
rmdir foo

Hello nixcraft,

First, let me say a big “thank you” for that very thorough latest post.

However, unfortunately I have a few questions.

In step 4, you say, assume now there is an app user inside the container called ‘myapp’.

At the moment, the only user I have inside the container is the default user ‘root’. Can I map root to a user outside the container, or do I need to create a user other than root inside the container to map to the outside?

The reason that I ask, is that I am presently running a couple of backup applications as root on a remote host. And those applications are currently connecting to the container host as root. These backup applications need to run as root on the remote host to have full access to the system being backed up.

I’m wondering what will happen if I need to specify a user other than root in the container.

The other question that I have is that in step 3 you do the following:

sudo chown lxdclientrw:lxdclientro /dir1/

The concern / question I have here is that in my case I’m not creating a new directory to share with the container. I have an existing directory structure that is presently owned by root, and I’m concerned that if I change the owner on that structure it might break the backup.

What are your thoughts?

Thanks
VW

LXD is unprivileged by default. As a result, a process running as UID 0 in the container actually runs as UID 100000 on the host. This is for security reasons. There is a post indicating that it is possible to do what you want (“Id mapping shared directory user and root access” on the LXD - Linux Containers Forum), but I must warn you it is a significant security risk.
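For completeness, here is a rough sketch of what that looks like, assuming the non-root host user you created has UID/GID 1000 and {container} stands for your container’s name; treat it as an illustration of the idea in that post, not a recommendation:

# As root on the LXD host (sudo -s): allow LXD to map host UID/GID 1000
echo 'root:1000:1' >> /etc/subuid
echo 'root:1000:1' >> /etc/subgid

# Map host UID/GID 1000 to root (UID/GID 0) inside the container, then restart
printf 'uid 1000 0\ngid 1000 0' | lxc config set {container} raw.idmap -
lxc restart {container}

With this in place, anything container root writes on a shared disk device shows up on the host as owned by that UID 1000 user.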

Hello nixcraft,

I think I might not have clearly described what I’m wanting to do.

Or possibly, what I’m “wanting to do” and what I really should be “wanting to do” are two different things. I’m relatively new to Linux and that is part of the problem.

The problem that I’m trying to solve is this:

I’m currently backing up a remote system, where the backup process runs as root, to an off-site system (the container host) via ssh / sshfs.

So that backup process currently has root access to the container host, which from a security perspective is not ideal.

What I was hoping to do with the container was to forward ssh to the container (which I know how to do) and then grant the container access to the remote backup system mount via a user that exists on the container host.

So I create a non-root user on the container host and grant that user full access to the remote backup system mount.

Then have the container run as root, but effectively only have “user-level” access to the file system outside of the container.

Conceptually it seems logical and straight-forward to me. But trying to do it in practice is proving to be somewhat more challenging.

Maybe containers aren’t the answer, for this task.

Are you looking for a minimal Linux user account that can back up certain dirs remotely without root access? If the answer is yes, using LXD is not a good idea. Do you need to back up only certain dirs, say /var/www/ and /etc/? Or do you need the whole / (root file system)?

On the remote system, which is the backup source, I am backing up the entire system. So the backup app is running as root and it needs to be.

It then connects to the system that we have been discussing in this thread and accesses one folder, where it copies the backup.

So, on that destination folder, it needs full access. But on the destination system, there is only a single folder that the process needs full access to.

I think what I will do is try to implement your earlier notes in the way that makes sense to me, and see how I go.

If I encounter issues, I will post a question and perhaps you can help.

I will be using the default root user in the container.

So I will be attempting to map the container root user to a non-root user outside of the container.

Hello nixcraft,

I’ve had a go at applying all of your earlier steps, and it appears to be “almost” working as I would like.

The directory is mapped to the container successfully. Yay!

And the container does have read-write access at the mapped directory level (i.e. /dir1).

However, in real life the mapped directory is an existing directory, and the mapped directory also contains some sub directories.

The container currently doesn’t have write access to the sub directories.

So it looks like I will need to apply some chown / chmod commands to the sub directories.

Can you advise which commands will need to be applied to the sub directories of the mapped directory (e.g. /dir1/dir2)?

Thanks
VW

Since you mapped the container UID to the host, set the ownership and permissions from inside the container:

lxc exec {container} sh            # get a shell inside the container (as root)
chown -R {user}:{group} /dir1      # give the mapped user ownership of everything under /dir1
chmod -R 0xxx /dir1                # set the mode you need, e.g. 0750
runuser -u {user} -- sh            # then switch to that user and do your work in /dir1
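Alternatively, since host UID/GID 300 is what the container sees as that user, the same fix can be done from the host side. A sketch, reusing the lxdclientrw/lxdclientro names from the earlier steps:

sudo chown -R lxdclientrw:lxdclientro /dir1/
sudo chmod -R u+rwX,g+rX,o-rwx /dir1/    # reproduce the 750-style permissions recursively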

Hello nixcraft,

Have this working.

A BIG thank you for your assistance.

Cheers
VW
