Optimization strategies for deploying wallet nodes with container technology

For business reasons, we need to deploy nodes for various public blockchains — that is, wallet services — that synchronize with the globally open peer nodes. Most wallet programs are immature, so manual maintenance and node data backups are often required. While exchanging experience with peers, I found that everyone is making heavy use of container technology to deploy wallet nodes. So here I would like to share what to watch out for, and how to optimize, when deploying a single wallet service with containers.

1. Image optimization. For the base image, Alpine is the usual recommendation: most general container-optimization advice treats a small, compact OS footprint as the main criterion. That is not wrong in itself, but remember that wallet services are still immature components. Few of them have been benchmarked on Alpine, and you may not get the wallet's full capabilities there. My strategy is to build my own image starting from the official Dockerfile, rather than waste energy crafting an image from scratch.
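As a rough illustration of that strategy, here is a sketch that follows the shape of go-ethereum's official multi-stage Dockerfile but swaps the Alpine runtime for a glibc base. The Go and Debian tags and the geth branch are illustrative assumptions, not pinned recommendations:

```dockerfile
# Hedged sketch: build geth the way the official Dockerfile does, but run it
# on a regular glibc base instead of Alpine/musl, since immature wallet
# binaries are rarely benchmarked against musl.
FROM golang:1.16-buster AS builder
RUN git clone --depth 1 --branch v1.10.8 \
    https://github.com/ethereum/go-ethereum /go-ethereum
WORKDIR /go-ethereum
RUN make geth

FROM debian:buster-slim
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
# 8545 = JSON-RPC, 30303 TCP/UDP = P2P networking and discovery
EXPOSE 8545 30303 30303/udp
ENTRYPOINT ["geth"]
```

The multi-stage build keeps the final image small even without Alpine, because the Go toolchain stays in the builder stage.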

2. Image distribution and builds may be blocked by the GFW — in particular, dependency packages may fail to download. My strategy is simply to put a build machine in Hong Kong and do the builds there.

3. Wallets are written in many languages and come in many versions, and the images floating around the Internet are of dubious quality. If you can build an image yourself, don't use someone else's. Even when a good third-party image exists, rebuild it from its Dockerfile: this helps you sort out each step, so you have a clear picture when something goes wrong.

4. Ideally you deploy once and then reuse the deployment scripts repeatedly. Most of the ready-made scripts on the Internet are incomplete, so I have put together a set and published it on GitHub for reference; it supports Kubernetes: https://github.com/xiaods/ethereum-client-k8s-setup. Chain data is currently under 1 TiB, so size the data disk you mount in advance.
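For the disk-sizing point, a minimal sketch of the volume claim might look like the following; the claim name and the 1200Gi figure are assumptions, sized with headroom above the current sub-1 TiB chain:

```yaml
# Hedged sketch: pre-size the data volume for the wallet node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: geth-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1200Gi   # leave headroom above the current <1 TiB chain size
```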

5. At first, since a synchronized node attracts quite a lot of traffic, I blocked the UDP port. But I then found that with UDP closed, no one would connect to my wallet node. It makes sense: if the P2P network cannot reach the UDP discovery port, peers cannot build the node topology, so the port should stay open. Note that on public clouds UDP ports are easily blocked; the sensible approach is to talk things over with the provider and understand their rules.
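In Kubernetes terms, the lesson above means exposing the P2P port on both protocols. A sketch, assuming the conventional Ethereum P2P port 30303 and an `app: geth` label:

```yaml
# Hedged sketch: expose both TCP and UDP 30303 so peer discovery works.
apiVersion: v1
kind: Service
metadata:
  name: geth-p2p
spec:
  type: NodePort
  selector:
    app: geth
  ports:
    - name: p2p-tcp
      protocol: TCP
      port: 30303
      targetPort: 30303
    - name: p2p-udp      # without this, discovery fails and peers never connect
      protocol: UDP
      port: 30303
      targetPort: 30303
```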

6. Each wallet exposes an RPC service, so depending on business needs, you can deploy multiple instances of a heavily requested wallet and put a load balancer in front to spread the RPC requests across them.
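A minimal sketch of that front-end load balancer, assuming the standard JSON-RPC port 8545 and replicas labeled `app: geth`:

```yaml
# Hedged sketch: a LoadBalancer Service fanning RPC traffic across
# all wallet replicas that match the selector.
apiVersion: v1
kind: Service
metadata:
  name: geth-rpc
spec:
  type: LoadBalancer
  selector:
    app: geth
  ports:
    - name: json-rpc
      protocol: TCP
      port: 8545
      targetPort: 8545
```

Note this only makes sense for read-style RPC traffic; each replica still syncs its own copy of the chain independently.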

7. An SSD disk is a must; otherwise you may not finish syncing a super popular public chain like Ethereum even in a month. Other chains matter less — they are not nearly as busy, and even Bitcoin syncs quickly. At the moment Ethereum is the hard case; everything else is fine.
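On a managed cluster, the SSD requirement can be expressed as a StorageClass that the data volume claims reference. A sketch using GCE as an example — the provisioner and `type` parameter differ per cloud:

```yaml
# Hedged sketch (GCE example): an SSD-backed StorageClass so the chain
# database lands on SSD rather than spinning disk.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```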

8. Managing the nodes with a cluster system such as Kubernetes is the best solution. As the saying goes, with good tools you get off work early. Many people touching Kubernetes for the first time are not familiar with writing Deployments and StatefulSets; there is no shortcut other than practice — or refer to my deployment scripts: https://github.com/xiaods/ethereum-client-k8s-setup
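For orientation, here is a minimal StatefulSet sketch for a single wallet node. The image tag, flags, and storage size are assumptions; a wallet node fits a StatefulSet (rather than a Deployment) because it needs a stable identity and its own persistent data volume:

```yaml
# Hedged sketch: one geth node as a StatefulSet with a per-pod data volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth
spec:
  serviceName: geth
  replicas: 1
  selector:
    matchLabels:
      app: geth
  template:
    metadata:
      labels:
        app: geth
    spec:
      containers:
        - name: geth
          image: ethereum/client-go:stable
          args: ["--datadir=/data", "--http", "--http.addr=0.0.0.0"]
          ports:
            - containerPort: 8545              # JSON-RPC
            - containerPort: 30303             # P2P TCP
            - containerPort: 30303
              protocol: UDP                    # P2P discovery
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1200Gi
```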

9. For storage, the node data is the most important asset. Keep a backup copy so that when you spin up a new node you can copy the data over directly. Because each wallet is an independent P2P peer, a cloned node can start straight from the copied data, which is very convenient.
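A minimal sketch of such a cold backup, assuming the node is stopped (or its container paused) first so the chain database is consistent; the paths and function name here are my own, not part of any wallet's tooling:

```shell
#!/bin/sh
# Hedged sketch: archive a wallet node's data directory so a new node
# can be seeded from the copy instead of syncing from scratch.

snapshot_datadir() {
  # $1: data directory to back up, $2: destination directory for the archive
  data_dir=$1
  backup_dir=$2
  stamp=$(date +%Y%m%d-%H%M%S)
  mkdir -p "$backup_dir"
  # In a real setup, stop writes first (e.g. stop the wallet container),
  # then archive the whole data directory in one pass.
  tar -czf "$backup_dir/node-data-$stamp.tar.gz" \
      -C "$(dirname "$data_dir")" "$(basename "$data_dir")"
  echo "$backup_dir/node-data-$stamp.tar.gz"
}
```

Seeding a new node is then just extracting the archive into the new node's mount before first start.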

10. Since the Docker releases of 2018 became widespread, image builds now commonly add a USER instruction, so that the container process runs as a non-root account and privilege-related security leaks are prevented. So when you mount a directory at the cluster level, you must make it writable by a user with uid 1000, or the container will fail. Here 1000 is an empirical value, not a special setting; note that you can also assign nobody directly.
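In Kubernetes this is expressed with a pod securityContext; `fsGroup` makes the mounted volume group-writable by the stated gid. The uid/gid 1000 below matches the empirical value mentioned above, and the claim name is an assumption:

```yaml
# Hedged sketch: let a non-root image (USER with uid 1000) write to its
# mounted data volume.
apiVersion: v1
kind: Pod
metadata:
  name: geth
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000      # volume files become group-owned by gid 1000
  containers:
    - name: geth
      image: ethereum/client-go:stable
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: geth-data
```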

Container technology is widely adopted in cloud computing, but to use it well, everyone should optimize for their own scenarios. Tools are lifeless things; container technology only shows its power when guided by real business experience.

Original: mp.weixin.qq.com/s/K_YV4xZay...