{"id":293,"date":"2019-08-30T15:14:20","date_gmt":"2019-08-30T18:14:20","guid":{"rendered":"http:\/\/www.55bet-pro.com\/laboratorios\/labtmc\/?page_id=293"},"modified":"2019-08-30T16:04:13","modified_gmt":"2019-08-30T19:04:13","slug":"cluster","status":"publish","type":"page","link":"http:\/\/www.55bet-pro.com\/laboratorios\/labtmc\/cluster","title":{"rendered":"Cluster"},"content":{"rendered":"\n
Cluster Access<\/a><\/p>\n Cluster – Usage Commands<\/strong><\/p>\n Administration:<\/strong><\/p>\n Admin – Configuration for remote shutdown Tutorial – Adding new machines<\/a><\/p>\n Commands to power on nodes remotely<\/strong><\/p>\n Power on the quad-core CPUs (0 to 6)<\/strong><\/p>\n ligarpc0<\/p>\n ligarpc1<\/p>\n ligarpc2<\/p>\n ligarpc3<\/p>\n ligarpc4<\/p>\n ligarpc5<\/p>\n ligarpc6<\/p>\n<\/li>\n<\/ul>\n Using the screen command<\/strong><\/p>\n Create a screen session:<\/strong><\/p>\n screen -S nomedascreen<\/p>\n List screen sessions:<\/strong><\/p>\n screen -ls<\/p>\n Open new windows inside screen:<\/strong><\/p>\n Ctrl + a + c<\/p>\n Switch between the windows open in screen:<\/p>\n Ctrl + a + 0<\/p>\n Ctrl + a + 1<\/p>\n \u2026<\/p>\n Ctrl + a + 9<\/p>\n Detach a screen session<\/strong><\/p>\n Ctrl + a + d<\/p>\n Detach a screen session (from the command line)<\/strong><\/p>\n screen -d nomedascreen<\/p>\n Reattach a screen session (from the command line)<\/strong><\/p>\n screen -r nomedascreen<\/p>\n Scroll mode inside a screen session<\/strong><\/p>\n Ctrl + a + Esc (press Esc alone to exit)<\/p>\n \u00a0<\/p>\n Cluster OpenMP<\/strong><\/p>\n To run cluster-openmp, create a file named mpd.hosts containing the node names:<\/p>\n E.g.: nedit mpd.hosts<\/p>\n and write<\/p>\n compute-0-0<\/p>\n compute-0-1<\/p>\n Also create the file kmp_cluster.ini, containing the following line:<\/p>\n --process_threads=2 --processes=2 --hostfile=mpd.hosts --launch=ssh --sharable_heap=100M<\/p>\n (the values can be adjusted).<\/p>\n Place the mpd.hosts and kmp_cluster.ini files in the same folder as the program. 
Then log in to the node listed at the top of the mpd.hosts file and run the program with:<\/p>\n ifort -cluster-openmp programa.f<\/p>\n<\/div>\n Expanding memory usage<\/strong><\/p>\n ulimit -s 2048000<\/p>\n export KMP_STACKSIZE=2048000000<\/p>\n (These commands must be rerun at each login.)<\/p>\n ifort -mcmodel=XXX -shared-intel program.f<\/p>\n where XXX can be "small", "medium", or "large". The -shared-intel flag is used for Intel processors.<\/p>\n \u00a0<\/p>\n Tutorials:<\/strong><\/p>\n How to configure the Cluster<\/strong><\/p>\n Cluster install:<\/p>\n AHCI mode (switch to ATA after installation)<\/p>\n Note: the internal DVD drive does not work. Use a USB enclosure.<\/p>\n Note 2: if a USB keyboard does not work during the install, use a PS\/2 keyboard.<\/p>\n Rocks rolls:<\/em><\/p>\n NAME VERSION ARCH ENABLED<\/p>\n sge: 6.1.1 x86_64 yes (job queueing system)<\/p>\n os: 6.1.1 x86_64 yes (required) CentOS 6.5 w\/updates pre-applied<\/p>\n kernel: 6.1.1 x86_64 yes (required) Rocks Bootable Kernel<\/p>\n ganglia: 6.1.1 x86_64 yes (Cluster monitoring system from UCB)<\/p>\n web-server: 6.1.1 x86_64 yes (Rocks Web Server Roll)<\/p>\n area51: 6.1.1 x86_64 yes (System security related services and utilities)<\/p>\n base: 6.1.1 x86_64 yes (required) Rocks Base Roll<\/p>\n hpc: 6.1.1 x86_64 yes (Rocks HPC Roll)<\/p>\n Disabling reinstall on hard boot:<\/em><\/p>\n How do I disable the feature that reinstalls compute nodes after a hard reboot?<\/em><\/p>\n When compute nodes experience a hard reboot (e.g., when the compute node is reset by pushing the power button or after a power failure), they will reformat the root file system and reinstall their base operating environment.<\/p>\n To disable this feature:<\/p>\n Log in to the frontend<\/p>\n Create a file that will override the default:<\/p>\n # cd \/export\/rocks\/install<\/p>\n # cp rocks-dist\/arch\/build\/nodes\/auto-kickstart.xml \\<\/p>\n 
site-profiles\/6.1.1\/nodes\/replace-auto-kickstart.xml<\/p>\n Where arch is "i386" or "x86_64".<\/p>\n Edit the file site-profiles\/6.1.1\/nodes\/replace-auto-kickstart.xml<\/p>\n Remove the line:<\/p>\n <package>rocks-boot-auto<\/package><\/p>\n Rebuild the distribution:<\/p>\n # cd \/export\/rocks\/install<\/p>\n # rocks create distro<\/p>\n Reinstall all your compute nodes<\/p>\n Note<\/p>\n An alternative to reinstalling all your compute nodes is to log in to each compute node and execute:<\/p>\n # \/etc\/rc.d\/init.d\/rocks-grub stop<\/p>\n # \/sbin\/chkconfig --del rocks-grub<\/p>\n Installing the FreeNX server:<\/em><\/p>\n Currently there is a version of NX and FreeNX in the CentOS Extras repository for both CentOS 5 and CentOS 6.<\/p>\n nano \/etc\/yum.repos.d\/CentOS-Base.repo<\/p>\n To install the stable version of NX \/ FreeNX, issue this command from the server:<\/p>\n [root@server ~]# yum install nx freenx<\/p>\n Admin – Configuration for remote shutdown:<\/p>\n chmod a+s \/sbin\/shutdown<\/p>\n ethtool eth0<\/p>\n ethtool -s eth0 wol g<\/p>\n echo '\/usr\/sbin\/ethtool -s eth0 wol g' >> \/etc\/rc.d\/rc.local<\/p>\n Journals\u00a0<\/strong><\/p>\n
<\/p>\n\n
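The Cluster OpenMP setup described above (an mpd.hosts file listing the nodes plus a kmp_cluster.ini with the launch options) can be generated with a short script. This is a minimal sketch, not part of the original page; the node names compute-0-0 and compute-0-1 and the option values are exactly the ones quoted above:

```shell
#!/bin/sh
# Sketch: create the two files Cluster OpenMP expects, in the
# same folder as the program (values taken from the text above).

# mpd.hosts lists one compute node per line; the program is
# compiled and launched from the first node in this list.
cat > mpd.hosts <<'EOF'
compute-0-0
compute-0-1
EOF

# kmp_cluster.ini: 2 threads per process, 2 processes,
# nodes read from mpd.hosts, remote launch over ssh.
cat > kmp_cluster.ini <<'EOF'
--process_threads=2 --processes=2 --hostfile=mpd.hosts --launch=ssh --sharable_heap=100M
EOF

# Then, on the head node of mpd.hosts:
#   ifort -cluster-openmp programa.f
```

Adjust --processes and --process_threads to match the number of nodes and cores actually available.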
\n
\n
\n
\n