Learning Docker Well, Part 4 (Docker Networking)

## Outline

1. Docker Networking

## 1. Docker Networking

### 1.1 Understanding docker0

Before starting, clear out the images and containers from the previous parts:

# Remove all containers
$ docker rm -f $(docker ps -aq)

# Remove all images
$ docker rmi -f $(docker images -aq)

Check the host's interfaces with `ip addr`; besides `lo` and the physical NIC, you will see the `docker0` bridge that Docker created.

Question: how does Docker handle network access for containers?

# Test: run a Tomcat container
$ docker run -d -P --name tomcat01 tomcat

# Check the container's internal network address
$ docker exec -it <container-id> ip addr

# On startup the container gets an eth0@if91 interface, with an IP assigned by Docker
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
261: eth0@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever


# Question: can the Linux host ping the container? Yes. Can the container reach the outside world? Also yes.
$ ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.069 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.074 ms

Every time a container starts, Docker assigns it an IP. As soon as Docker is installed, the host gets a docker0 bridge, and containers attach to it using the veth-pair technique.

https://www.cnblogs.com/bakari/p/10613710.html

# Notice that the virtual NICs these containers create on the host come in pairs
# A veth-pair is a pair of virtual interfaces: one end plugs into the network stack, and the two ends are wired to each other
# Thanks to this property, a veth-pair acts as a bridge between virtual network devices
# OpenStack, Docker container links, and OVS links all rely on the veth-pair technique
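To make the veth-pair idea concrete, here is a dry-run sketch of the plumbing involved. The `ip link` commands need root and a real container PID, so the function only prints them; the names `veth-host`/`veth-ctr` and the `<container-pid>` placeholder are illustrative, not what Docker actually uses.

```shell
#!/bin/sh
# Print (not execute) the ip(8) steps that create a veth pair by hand.
# One end stays on the host and joins the docker0 bridge; the other end
# is moved into the container's network namespace.
print_veth_cmds() {
    echo "ip link add veth-host type veth peer name veth-ctr"
    echo "ip link set veth-ctr netns <container-pid>"
    echo "ip link set veth-host master docker0"
    echo "ip link set veth-host up"
}
print_veth_cmds
```

Deleting the container tears down both ends of the pair at once, which is why the host-side NIC disappears together with the container.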

Two containers on docker0 can ping each other.

Network model diagram:

Conclusion: tomcat01 and tomcat02 sit behind the same virtual bridge, docker0, which acts as their gateway.

Unless told otherwise, every container is routed through docker0, and Docker assigns it a free IP from docker0's subnet by default. Docker uses Linux bridging: the host runs a network bridge for Docker containers, named docker0. All of Docker's network interfaces are virtual, and virtual interfaces forward efficiently (handy for passing files over the internal network).

As soon as a container is deleted, its veth pair disappears with it.

docker0 does not support addressing containers by container name.

### 1.2 Custom Networks

docker network
connect     -- Connect a container to a network
create      -- Creates a new network with a name specified by the user
disconnect  -- Disconnects a container from a network
inspect     -- Displays detailed information on a network
ls          -- Lists all the networks created by the user
prune       -- Remove all unused networks
rm          -- Deletes one or more networks

List all Docker networks with `docker network ls`.

Network modes:

bridge: bridged networking through docker0 (the default; custom networks also use bridge mode)

none: no network configured; rarely used

host: share the host's network stack

container: join another container's network (rarely used; very limiting)

Test:

# A plain docker run implicitly uses --net bridge, and that bridge is our docker0
$ docker run -d -P --name tomcat01 tomcat
# ...is equivalent to: docker run -d -P --name tomcat01 --net bridge tomcat

# docker0's limitation: containers cannot reach each other by name. --link can
# patch that, but it is clumsy. Better: create a custom network.
$ docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
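As a quick sanity check on the numbers in that command (illustrative arithmetic only, not part of the tutorial commands): a /16 prefix leaves 16 host bits, so mynet has room for tens of thousands of containers.

```shell
#!/bin/sh
# How many container addresses does --subnet 192.168.0.0/16 give us?
prefix=16
host_bits=$((32 - prefix))
total=$((1 << host_bits))
# subtract the network address, the broadcast address, and the 192.168.0.1 gateway
usable=$((total - 3))
echo "addresses: $total, usable for containers: $usable"
```

For a small lab, a /24 would also be plenty; the /16 here simply matches the tutorial.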


$ docker network inspect mynet


Start two Tomcat containers on mynet and inspect the network again; both containers show up with IPs from the 192.168.0.0/16 subnet.


On a custom network, containers can ping each other by name; no --link needed.

With a custom network, Docker maintains the name-to-address mapping for us. This is the recommended way to run containers day to day.

Benefits:

Redis: different clusters run on different networks, keeping each cluster isolated and healthy.

MySQL: likewise, each cluster gets its own network, keeping it isolated and healthy.


### 1.3 Connecting Containers Across Networks

docker0 and mynet are different subnets, so containers on one cannot reach the other:

[root@abc:~]# docker run -d -P --name tomcat01 tomcat:7
39be579ed8f523c9b2adabbf5a41dc84e047d835ccc406a1f7be62523ac0d713
[root@abc:~]# docker run -d -P --name tomcat02 tomcat:7
4fcc99ed702bd7ca5d3601c5a3a64c414d54dbac0dade2532175d0f29f1b7745
[root@abc:~]# docker ps
CONTAINER ID   IMAGE      COMMAND             CREATED          STATUS          PORTS                     NAMES
4fcc99ed702b   tomcat:7   "catalina.sh run"   28 seconds ago   Up 27 seconds   0.0.0.0:49165->8080/tcp   tomcat02
39be579ed8f5   tomcat:7   "catalina.sh run"   32 seconds ago   Up 31 seconds   0.0.0.0:49164->8080/tcp   tomcat01
0e378f037a6d   tomcat:7   "catalina.sh run"   11 minutes ago   Up 11 minutes   0.0.0.0:49162->8080/tcp   tomcat-net-02
3be82f0ce707   tomcat:7   "catalina.sh run"   11 minutes ago   Up 11 minutes   0.0.0.0:49161->8080/tcp   tomcat-net-01
[root@abc:~]# docker exec -it tomcat01 ping tomcat-net-01
ping: tomcat-net-01: Name or service not known

The two Tomcats cannot connect directly. To let tomcat01 on the docker0 network reach tomcat-net-01 on the mynet network, we need to connect tomcat01 to mynet.

 [root@abc:~]# docker network connect --help

Usage:  docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network

Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container
--------------------------------------------------------------------------------------
Test connecting the networks:

[root@abc:~]# docker network connect mynet tomcat01   (arguments: network first, then container)
[root@abc:~]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "b08530c7ee8a80f4795a9213a4270f5c852dd4e62816c40c3094ab94e88af2fe",
        "Created": "2022-06-06T11:12:06.231663688+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0e378f037a6ded147dec691d750d4b8762cc10a534f3062e59ac140e3927fa7d": {
                "Name": "tomcat-net-02",
                "EndpointID": "0ba0c9bfa56f8901253c6e2329151ccf8ef036adf216e4b8ab3882bd71a8101c",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            ← this entry is the container we just connected
            "39be579ed8f523c9b2adabbf5a41dc84e047d835ccc406a1f7be62523ac0d713": {
                "Name": "tomcat01",
                "EndpointID": "21b6673eacf202800746e120fc7cb26ec364006610f4223d548031a4d6c0e24b",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            },
            "3be82f0ce707b91699a8e1077c15fa3a212886f0ad7b78819a53d6ba5e7a96b7": {
                "Name": "tomcat-net-01",
                "EndpointID": "4de3d3e11cedced52a6bc3a982f4710dd3212a40acd0ac3cd2bdbff0fc334952",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

# Test again: the ping now succeeds. docker network connect gave tomcat01 a second
# interface with an IP on mynet (one container, two IPs).
[root@abc:~]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.084 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.086 ms

### 1.4 Deploying a Redis Cluster

Target layout: three Redis masters, each with a replica. If a master goes down, its replica must take over.

[root@abc:~]# docker network create redis --subnet 172.38.0.0/16
3cc89fec59aa31cf59f6728eed12f863de06c52ae4939bba4831146c09d46aba
[root@abc:~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
e0f53dbb0281   bridge    bridge    local
db8b998a0ea4   host      host      local
b08530c7ee8a   mynet     bridge    local
ce321a8c285c   none      null      local
3cc89fec59aa   redis     bridge    local   ← the network we just created
[root@abc:~]# docker network inspect redis
[
    {
        "Name": "redis",
        "Id": "3cc89fec59aa31cf59f6728eed12f863de06c52ae4939bba4831146c09d46aba",
        "Created": "2022-06-06T11:48:06.449548045+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.38.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
-------------------------------------------------------------------------
    Create the six redis config files with the script below;
    it can be pasted straight into the shell.

for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes 
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
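The same loop can be tried safely before touching the real host paths. This variant (an illustrative rewrite) writes into a throwaway directory from `mktemp -d` instead of `/mydata`, producing the identical six files:

```shell
#!/bin/sh
# Generate the six redis.conf files into a scratch directory.
base=$(mktemp -d)
for port in $(seq 1 6); do
    mkdir -p "${base}/node-${port}/conf"
    cat << EOF > "${base}/node-${port}/conf/redis.conf"
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
echo "wrote configs under $base"
```

Each node listens on 6379 inside its container but announces its fixed cluster IP (172.38.0.11 through 172.38.0.16), which is why the IPs are baked into the configs.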
-----------------------------------------------------------------------------
​ [root@abc:~]# cd /mydata
[root@abc:/mydata]# ll
total 4
drwxr-xr-x 8 root root 4096 Jun  6 11:51 redis
​ [root@abc:/mydata]# cd redis
​ [root@abc:/mydata/redis]# ll
total 24
drwxr-xr-x 3 root root 4096 Jun  6 11:51 node-1
drwxr-xr-x 3 root root 4096 Jun  6 11:51 node-2
drwxr-xr-x 3 root root 4096 Jun  6 11:51 node-3
drwxr-xr-x 3 root root 4096 Jun  6 11:51 node-4
drwxr-xr-x 3 root root 4096 Jun  6 11:51 node-5
drwxr-xr-x 3 root root 4096 Jun  6 11:51 node-6
​ [root@abc:/mydata/redis]# cd node-1
​ [root@abc:/mydata/redis/node-1]# ls
conf
​ [root@abc:/mydata/redis/node-1]# cd conf
​ [root@abc:/mydata/redis/node-1/conf]# ls
redis.conf
​ [root@abc:/mydata/redis/node-1/conf]# cat redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes 
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.11
cluster-announce-port 6379         ← the announced port
cluster-announce-bus-port 16379
appendonly yes
[root@abc:/mydata/redis/node-1/conf]# 
---------------------------------------------------------------------------------
    The six redis configs above are now in place.
    Next, start the six nodes.

docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
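The six `docker run` commands above differ only in the node number, so they can be generated from one template. A dry-run sketch (it prints the commands rather than running them; pipe the output to `sh` to actually start the containers, assuming the `redis` network and configs already exist):

```shell
#!/bin/sh
# Emit the docker run command for each of the six cluster nodes.
gen_redis_runs() {
    for n in $(seq 1 6); do
        echo "docker run -p 637${n}:6379 -p 1637${n}:16379 --name redis-${n}" \
             "-v /mydata/redis/node-${n}/data:/data" \
             "-v /mydata/redis/node-${n}/conf/redis.conf:/etc/redis/redis.conf" \
             "-d --net redis --ip 172.38.0.1${n} redis:5.0.9-alpine3.11" \
             "redis-server /etc/redis/redis.conf"
    done
}
gen_redis_runs
```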




# Create the cluster (the "/data #" prompt means we are inside one of the redis containers)
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
    The command prints:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: ebfbbc88a83fa84c3d73fe048fc37ab82841fa0e 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: ed7bac9d22ce436c650f9a51c3ae7a3aa84a7537 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 2f1fdb553165c31c599f5cdbac5a42fa15901602 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: d2edadd37c2b243ffa3a13ac7bb352062160f4b8 172.38.0.14:6379
   replicates 2f1fdb553165c31c599f5cdbac5a42fa15901602
S: 98bec4772bd3cdc7335111f75bc3f6b6ad2fae26 172.38.0.15:6379
   replicates ebfbbc88a83fa84c3d73fe048fc37ab82841fa0e
S: 6e8dda893836d8182af1a7cbe0ff71b32d0743ef 172.38.0.16:6379
   replicates ed7bac9d22ce436c650f9a51c3ae7a3aa84a7537
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: ebfbbc88a83fa84c3d73fe048fc37ab82841fa0e 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 98bec4772bd3cdc7335111f75bc3f6b6ad2fae26 172.38.0.15:6379
   slots: (0 slots) slave
   replicates ebfbbc88a83fa84c3d73fe048fc37ab82841fa0e
S: d2edadd37c2b243ffa3a13ac7bb352062160f4b8 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 2f1fdb553165c31c599f5cdbac5a42fa15901602
S: 6e8dda893836d8182af1a7cbe0ff71b32d0743ef 172.38.0.16:6379
   slots: (0 slots) slave
   replicates ed7bac9d22ce436c650f9a51c3ae7a3aa84a7537
M: 2f1fdb553165c31c599f5cdbac5a42fa15901602 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: ed7bac9d22ce436c650f9a51c3ae7a3aa84a7537 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
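The slot allocation printed above can be reproduced. My reading of redis-cli's behavior, inferred from this output, is that it splits the 16384 hash slots over the N masters by rounding i * 16384 / N at each boundary; a sketch of that arithmetic:

```shell
#!/bin/sh
# Split 16384 hash slots over N masters, rounding each boundary.
split_slots() {
    slots=16384
    masters=$1
    i=0
    while [ "$i" -lt "$masters" ]; do
        # round(i*slots/masters) via integer arithmetic: round(x/y) = (2x + y) / (2y)
        lo=$(( (2 * i * slots + masters) / (2 * masters) ))
        hi=$(( (2 * (i + 1) * slots + masters) / (2 * masters) - 1 ))
        echo "Master[$i] -> Slots $lo - $hi"
        i=$((i + 1))
    done
}
split_slots 3
```

With three masters this yields 0-5460, 5461-10922, and 10923-16383, matching the cluster-create output.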

That completes the Redis cluster deployment.

Test the cluster:

 /data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3            ← 3 master nodes in the cluster
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:360
cluster_stats_messages_pong_sent:359
cluster_stats_messages_sent:719
cluster_stats_messages_ping_received:354
cluster_stats_messages_pong_received:360
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:719
​ 127.0.0.1:6379> cluster nodes
e3b0f799f788ea426dcf1c19e34c745fb66cae69 172.38.0.16:6379@16379 slave cabf766d1ce0954661eff7e7c02f2f3a7978f2b5 0 1654488329895 6 connected
cabf766d1ce0954661eff7e7c02f2f3a7978f2b5 172.38.0.12:6379@16379 master - 0 1654488329595 2 connected 5461-10922
82e39cc9fe0402dd2b221a7730a2ca5a56a65e5e 172.38.0.11:6379@16379 myself,master - 0 1654488328000 1 connected 0-5460
ea39590e677f3ead82b33573dc1704ba29312d01 172.38.0.13:6379@16379 master - 0 1654488329595 3 connected 10923-16383
b30f555ac1dbaed6102d574d3ca3bc6bc00c46a3 172.38.0.14:6379@16379 slave ea39590e677f3ead82b33573dc1704ba29312d01 0 1654488329000 4 connected
c6659b50a87703ff83950e78363fdaac4f5ad761 172.38.0.15:6379@16379 slave 82e39cc9fe0402dd2b221a7730a2ca5a56a65e5e 0 1654488328893 5 connected
127.0.0.1:6379> 
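Which master stores a given key is decided by HASH_SLOT(key) = CRC16(key) mod 16384. A from-scratch sketch of that function (CRC-16/XMODEM, the variant the Redis Cluster spec names; hash-tag handling is omitted, and the spec's published check value CRC16("123456789") = 0x31C3 makes it easy to verify):

```shell
#!/bin/sh
# CRC-16/XMODEM (poly 0x1021, init 0), as used by Redis Cluster,
# then reduce modulo 16384 to get the hash slot.
keyslot() {
    key=$1
    crc=0
    i=0
    while [ "$i" -lt "${#key}" ]; do
        c=$(printf '%s' "$key" | cut -c $((i + 1)))
        byte=$(printf '%d' "'$c")          # ASCII code of the character
        crc=$((crc ^ (byte << 8)))
        j=0
        while [ "$j" -lt 8 ]; do
            if [ $((crc & 0x8000)) -ne 0 ]; then
                crc=$((((crc << 1) ^ 0x1021) & 0xFFFF))
            else
                crc=$(((crc << 1) & 0xFFFF))
            fi
            j=$((j + 1))
        done
        i=$((i + 1))
    done
    echo $((crc % 16384))
}
keyslot 123456789
```

0x31C3 is 12739, which falls in the 10923-16383 range, so a key like "123456789" would land on the third master. This is exactly what the `-c` (cluster) flag of redis-cli follows when it redirects a SET or GET to the right node.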
end
  • Author: 旭仔 (contact the author)
  • Published: 2024-06-09 13:58
  • License: free to repost, non-commercial, no derivatives, keep attribution
  • Repost notice: when reposting an article that was itself reposted here, include the original source link
  • Official-account reposts: please append the author's WeChat official-account QR code at the end of the article