# netmaker
a
Egress to public IP namespaces
b
You can have multiple ingress gateways. You can have multiple egress gateways as well, as long as the egress ranges do not overlap, or you use ACLs to control which other nodes in the network can reach the egress gateway
a
I am not sure about the overlap part. Can you explain it a bit more please?
My use case is actually: I have 5 nodes in 5 different regions (UK, US, AU, etc.) and they are part of my k3s cluster. I want to use these 5 nodes as ingress and egress gateways so that I can access the public internet through any region, like a VPN, via Netmaker external clients. I also want to make sure that these 5 nodes can communicate with each other.
Also, I would like to access the cluster network from these same endpoints. Suppose my k3s cluster network is on 10.40.0.0/16 and services are on 10.41.0.0/16. I would like to somehow be able to use the external clients to access these cluster pods and services directly, and maybe limit which external clients can actually access these cluster networks through ACLs. Would that be possible in this scenario?
b
Once you set up one egress to be a VPN, all traffic to the public internet from all nodes will be routed through the egress network. To do what you want, set up two networks: one that allows the 5 nodes to communicate, and another in which ACLs prevent the nodes from communicating; create the gateways on the second network. Alternatively, create 6 networks: the first allows the 5 nodes to communicate, and each of the next 5 has a single node which acts as the ingress/egress gateway for its region
a
Actually, inter-node communication is not a must-have to start with. In my k3s cluster these nodes are working fine and communicating with each other. I was thinking about this to make sure that the external clients can communicate with all 5 nodes, but now that I think about it, they don't actually need to. Also, I didn't find any good guide in the Netmaker docs or the Netmaker k8s YAML files for setting up multiple connections on the netclient DaemonSet. I was also thinking about doing PRs once I get this whole setup working, so that you guys can keep it as an example doc for other people.
One more thing: yesterday I almost crashed my whole k3s cluster trying to do this. I think I should not have done all the networking at the host level, but the default DaemonSet YAML file does everything at the host level -> https://github.com/gravitl/netmaker/blob/master/k8s/client/netclient-daemonset.yaml
My k3s installation is already running on the Flannel WireGuard backend.
b
Any contributions to docs would be most welcome
a
I really want to contribute more on the k8s side of things, but first I have to figure out how to achieve this complex network.
b
That would also be welcomed
a
Any idea how I join multiple networks through that DaemonSet YAML?
The Docker Hub image for netclient also does not document all the environment variables the image can take.
b
My knowledge of k8s is limited. Need @jolly-london-20127 to jump in
a
He seems to be offline right now. I will just state the network architecture I am trying to achieve, and he might reply when he comes online.
That is for the server. Is there any similar one for the netclient? And also, what's the difference between the netclient and the netclient-go image?
b
netclient-go uses userspace WireGuard rather than kernel WireGuard, so it will be slower
a
I understand.
b
netclient --help
a
alright I can see all the parameters now.
But do all these parameters map directly to Docker environment variables?
b
Short answer, no
a
https://github.com/gravitl/netmaker/blob/master/k8s/client/netclient-daemonset.yaml ok, I see what is happening here. Basically, netclient saves a configuration for all its networks and settings inside the /etc/netclient folder, I think, and that folder is mapped to the host machine. So if I need to join multiple networks, I need to manually execute the netclient join command as many times as I want inside the netclient container, and it will save the configuration on the host for persistence. Later, even if the container dies and is rescheduled, it will still have access to those configuration files and will use them to bootstrap the join for all the networks.
I might be wrong. Hope @jolly-london-20127 can correct me on this.
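If that reading is right, the manual join might look something like the following sketch. The pod name and the access token are placeholders, and the exact `netclient join` flags depend on the netclient version, so treat this as an assumption to verify, not a documented procedure:

```shell
# Find the netclient pod running on the node of interest
# (label selector assumed from the stock DaemonSet manifest)
kubectl get pods -l app=netclient -o wide

# Exec into that pod and join the additional network.
# The resulting config is written under /etc/netclient, which the
# DaemonSet maps to a hostPath, so it survives pod restarts.
kubectl exec -it <netclient-pod-name> -- netclient join -t <ACCESS_TOKEN_FOR_SECOND_NETWORK>
```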
j
Yes @acoustic-easter-26071, you figured it out. You have 2 options: 1. execute the join command from the running netclient containers, or 2. deploy a second DaemonSet but mount a different host directory (for instance, /etc/netclient-2)
a
Yeah, but if one netclient can join multiple networks, then I should be a little conservative about resources and not deploy multiple, I guess.
j
yes, that's true
option #2 would be a little more Kubernetes-native, since typically you don't want to execute commands in a running container (to keep it "stateless"), but I think option #1 makes more sense here
a
But the DaemonSet YAML keeps the container stateless anyway, because it stores everything in a hostPath. So even if the container dies and spins up again, it can still read those configurations and rejoin the networks.
@jolly-london-20127 https://github.com/gravitl/netmaker/blob/master/k8s/server/netmaker-server.yaml in this YAML file there are many containerPorts defined, which I suppose are for WireGuard interfaces, starting from 31821, with a NodePort service exposing them. May I know what exactly these ports are being used for, and is it possible to change them to different containerPorts?
And how do I turn off hostNetwork usage on this -> https://github.com/gravitl/netmaker/blob/master/k8s/client/netclient-daemonset.yaml ? My k3s is deployed with the Flannel WireGuard backend, so I want to run all the Netmaker networks contained inside the container itself and not conflict with my host WireGuard network.
Don't you think we should use a PVC for this instead of mounting host paths directly, to keep it Kubernetes-native? I can do a PR to update the manifests with PVC mounts, but I'm not sure whether it would be accepted.
j
this is for the netmaker node on the server. You cannot currently change the range of these ports. Netmaker on k8s will start joining networks at port 31851 and then iterate upwards, so we make 10 available for this purpose
you will need to remove hostNetwork: true from the yaml, but I believe some other changes are necessary as well...need to check
a
Ok, I understand. 31821 or 31851?
j
*31821
a
got it.
I was thinking to remove hostNetwork: true and also use the hostNetwork env variable that is available at https://github.com/gravitl/netmaker/blob/master/compose/docker-compose.reference.yml
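Combining the two ideas, the DaemonSet change might look roughly like this. This is a sketch under assumptions: the `HOST_NETWORK` variable name and its `"off"` value are taken from the compose reference file linked above and should be verified against the netclient docs for the version in use, and the image tag is a placeholder:

```yaml
spec:
  template:
    spec:
      # hostNetwork: true   # removed, so WireGuard interfaces are created
      #                     # inside the pod network namespace instead of
      #                     # on the host (avoids clashing with Flannel's WG)
      containers:
        - name: netclient
          image: gravitl/netclient:latest   # tag is an assumption
          env:
            - name: HOST_NETWORK
              value: "off"                  # value assumed; verify in docs
```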
j
for PVC, I could not find an easy way to do this with a DaemonSet. However, you could use a StatefulSet or something else, but then it's not guaranteed to run on all nodes
a
A PVC with a retention policy might do the trick, actually. No real need for a StatefulSet.
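A minimal sketch of that idea, with the caveat that a single ReadWriteOnce PVC cannot back a DaemonSet across multiple nodes, so some per-node separation would still be needed (all names here are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: netclient-config
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  # The "retention" part lives on the PV/StorageClass side:
  # a PV with persistentVolumeReclaimPolicy: Retain keeps the
  # netclient config even if this claim is deleted.
```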
j
if you find a good way let me know, would be happy to take a PR on that. Agreed hostmount is not ideal
a
ok, I will let you know.
Oh, about the SSL problem: somehow I couldn't make it work fully. Not sure what I am doing wrong here. I have the NGINX ingress controller doing the TLS termination, but from the ingress controller to both the API and the UI I am just using a non-TLS connection, because every time I try to go to https://netmaker-server:8081 without the NGINX ingress and any TLS termination, I get an SSL method error. Same goes for the UI.
I think the deployment YAML should take a secret name so that it can access the TLS secret that NGINX and cert-manager generate, and put it inside the container's nginx configuration, to have a fully secure stream from the ingress all the way to the API.
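For the ingress-to-backend hop specifically, one hedged way to express this with the NGINX ingress controller is the `backend-protocol` annotation, which makes the controller re-encrypt traffic to the backend. This only helps once the Netmaker API itself serves TLS; the hostname, issuer, and service names below are placeholders, not values from the actual setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: netmaker-api
  annotations:
    # Re-encrypt: speak HTTPS from the ingress controller to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: letsencrypt-prod   # issuer name assumed
spec:
  tls:
    - hosts:
        - vpn.example.com          # hostname is a placeholder
      secretName: netmaker-tls     # cert-manager writes the cert here
  rules:
    - host: vpn.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: netmaker-api # service name assumed
                port:
                  number: 8081
```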
j
I usually track this down by describing the certificate
do kubectl describe certificaterequest; it will usually tell you where it's stuck
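For reference, the usual cert-manager debugging chain follows the resources in order; each `describe` typically points at the next resource in the chain (resource names are placeholders):

```shell
kubectl describe certificate netmaker-tls          # overall cert status
kubectl describe certificaterequest                # which request is stuck
kubectl describe order                             # ACME order status
kubectl describe challenge                         # failing ACME challenge, if any
kubectl logs -n cert-manager deploy/cert-manager   # controller-side errors
```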
and typically it is because the certificate provider name is incorrect
a
Is Netmaker also using cert-manager to generate the certificates?
My ingress is generating the certificates perfectly, and I am able to use HTTPS on the ingress end.
But the connection from the ingress to the Netmaker endpoint is not secure, because Netmaker's SSL is not working.
j
What I am wondering is: was the correct cert provider name used when creating the Netmaker ingress?
a
Yes, I think so, because the ACME challenge was successful and the Let's Encrypt certificate was added to the secret. Go to -> vpn.logicbuff.com (oh, never mind, it is down now 😦) and you can check the certificate there. I used the Let's Encrypt production issuer.
j
hmmm ok
wait, you are using 8081 to access the api?
a
Yes, in the ingress I am using 8081 to access the API backend service. But the ingress uses port 443 externally, for the UI and the public.
@jolly-london-20127 have you checked whether it is possible to disable host networking for the netclient daemon? Please let me know.