Solved!

July 19th, 2016 14:00

Communication error adding Slave MDM

Fresh install of EMC ScaleIO v2.0-6035.0 using the vCenter plugin on VMware ESXi 6.0.0, build 3380124.

During the deployment process, I get this error when adding the Slave MDM:

FAILED (Error code: COMMUNICATION_ERROR) : Add Slave MDM


I already moved to a different set of ESX servers (different hardware vendor) and got the exact same error.


I can ping between the hosts using the data IPs below.


Any ideas?


-------

Jul 19, 2016 8:17:00 PM Host: esx-a1-3.ilab.plexxi.com - SUCCEEDED: Install ScaleIO Module [IP: 172.25.200.3, module: TB]

Jul 19, 2016 8:17:00 PM Host: esx-a1-2.ilab.plexxi.com - SUCCEEDED: Install ScaleIO Module [IP: 172.25.200.2, module: SLAVE_MDM]

Jul 19, 2016 8:17:00 PM Host: esx-a1-1.ilab.plexxi.com - SUCCEEDED: Install ScaleIO Module [IP: 172.25.200.1, module: MASTER_MDM]

Jul 19, 2016 8:17:00 PM Host: esx-a1-1.ilab.plexxi.com - STARTED: Add Master MDM [Mgmt IPs: [172.25.200.1], data IPs: [10.1.100.1, 10.1.101.1]]

Jul 19, 2016 8:17:08 PM Host: esx-a1-1.ilab.plexxi.com - SUCCEEDED: Add Master MDM [Mgmt IPs: [172.25.200.1], data IPs: [10.1.100.1, 10.1.101.1]]

Jul 19, 2016 8:17:08 PM Host: esx-a1-2.ilab.plexxi.com - STARTED: Add Slave MDM [Mgmt IP: 172.25.200.2, data IP 1: 10.1.100.2, data IP 2: 10.1.101.2]

Jul 19, 2016 8:17:11 PM Host: esx-a1-2.ilab.plexxi.com - FAILED (Error code: COMMUNICATION_ERROR) : Add Slave MDM [Mgmt IP: 172.25.200.2, data IP 1: 10.1.100.2, data IP 2: 10.1.101.2]

Jul 19, 2016 8:23:48 PM Host: esx-a1-2.ilab.plexxi.com - STARTED: Add Slave MDM [Mgmt IP: 172.25.200.2, data IP 1: 10.1.100.2, data IP 2: 10.1.101.2]

Jul 19, 2016 8:23:52 PM Host: esx-a1-2.ilab.plexxi.com - FAILED (Error code: COMMUNICATION_ERROR) : Add Slave MDM [Mgmt IP: 172.25.200.2, data IP 1: 10.1.100.2, data IP 2: 10.1.101.2]

Jul 19, 2016 8:37:48 PM Host: esx-a1-2.ilab.plexxi.com - STARTED: Add Slave MDM [Mgmt IP: 172.25.200.2, data IP 1: 10.1.100.2, data IP 2: 10.1.101.2]

------

73 Posts

April 19th, 2017 11:00

This should have been done prior to installing the MDM/SDS backend. The plugin shouldn't let you continue until the SDC is installed.

Either way, go to the ScaleIO plugin in the vSphere Web Client. There is a link for installing the SDC on its landing page.

If you can't install it that way, you can do it manually. You downloaded it with the other ScaleIO software. Find the VIB that is appropriate for your ESXi version (5.5 or 6.0), copy it to your host, and install it like any other VIB on an ESXi host. Reboot the host after the initial install.
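
For example, a manual install from the ESXi shell might look like the sketch below. The VIB file name here is illustrative, not the real one; use the actual SDC VIB from your ScaleIO download, and note that esxcli needs the full path to the file.

# copy the VIB to the host first (e.g. with scp), then install it
esxcli software vib install -v /tmp/sdc-2.0-6035.0.vib
# reboot so the SDC module loads
reboot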

7 Posts

July 19th, 2016 15:00

A bit more detail from the ESX host it's failing on:

......

Completed: Deploy VM ScaleIO-172.25.200.2

Completed: Perform initial configuration for VM: 172.25.200.2

Completed: Install ScaleIO LIA module

Failed: Add SLAVE_MDM to MDM cluster (ScaleIO - Communication error between core components)

Waiting: Install ScaleIO SDS module

......

306 Posts

July 21st, 2016 00:00

Hi Simon,

Can you please SSH into the deployed SVMs and try to ping all three IPs (management + data) in each direction? Please don't run the pings from the ESXi hosts; run them from the SVMs, and paste the outputs here.
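
For instance, from the SVM at 172.25.200.1, the checks toward the second SVM would look like this (IPs taken from the deployment log above; repeat in the opposite direction, and toward the Tie-Breaker as well):

# management IP, then both data IPs of the second SVM
ping -c 3 172.25.200.2
ping -c 3 10.1.100.2
ping -c 3 10.1.101.2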

Thanks!

Pawel

306 Posts

December 1st, 2016 04:00

Hi Jonathan,

No, I haven't heard from Simon. In most cases the issue lies in the network configuration on the ESXi.

54 Posts

April 9th, 2017 12:00

Hi

I am also facing the same error. I am able to ping both the management and data-link IPs from my SVMs.

Regards

Suriya M

73 Posts

April 10th, 2017 07:00

Hi Suriya_007 (and everyone else),

In a case like this, being able to ping is a good start. However, the key here is that adding the Master MDM succeeds because that step still uses SSH (port 22) to do the work. Once the Master MDM is up and running, the two ports that the MDM uses, 9011 and 6611, take over for adding all the other components (Slave MDMs and SDSes), and SSH is no longer the main communication channel.

Verify that the two ports listed above are open in the firewall, as that is the most likely cause of failure here. Also verify that the MTU size is set to the same value on all components, both on the SVM side and on the VMware vSwitch side. Less likely, but still worth checking: SSH to the Slave MDM SVM and make sure the MDM process is up and running.
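
A quick sketch of those checks from the SVM shell (standard ScaleIO paths assumed; adjust to your environment):

# is anything listening on the MDM ports?
netstat -tlnp | grep -E ':(9011|6611)'
# compare the SVM MTU against the VMware vSwitch MTU
ip link show | grep -i mtu
# is the mdm process running at all?
ps -ef | grep mdm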

Hope that helps

54 Posts

April 14th, 2017 12:00

Hi Rickh,

As per the above, I have opened both ports, but I cannot ping from the primary data-link IP to the secondary data-link IP. I also tried checking the MDM server with the "service mdm status" command, but no service is running. This is how my VMware ScaleIO deployment looks:

module: SLAVE_MDM]

Apr 14, 2017 7:21:28 PM Host: mcdcesxi09.masscloudspace.local - STARTED: Install ScaleIO Module [IP: 10.20.30.14, module: MASTER_MDM]

Apr 14, 2017 7:21:59 PM Host: mcdcesxi11.masscloudspace.local - SUCCEEDED: Install ScaleIO Module [IP: 10.20.30.16, module: TB]

Apr 14, 2017 7:21:59 PM Host: mcdcesxi10.masscloudspace.local - SUCCEEDED: Install ScaleIO Module [IP: 10.20.30.15, module: SLAVE_MDM]

Apr 14, 2017 7:21:59 PM Host: mcdcesxi09.masscloudspace.local - SUCCEEDED: Install ScaleIO Module [IP: 10.20.30.14, module: MASTER_MDM]

Apr 14, 2017 7:21:59 PM Host: mcdcesxi09.masscloudspace.local - STARTED: Add Master MDM [Mgmt IPs: [10.20.30.14], data IPs: [10.20.70.13]]

Apr 14, 2017 7:22:05 PM Host: mcdcesxi09.masscloudspace.local - SUCCEEDED: Add Master MDM [Mgmt IPs: [10.20.30.14], data IPs: [10.20.70.13]]

Apr 14, 2017 7:22:05 PM Host: mcdcesxi10.masscloudspace.local - STARTED: Add Slave MDM [Mgmt IP: 10.20.30.15, data IP 1: 10.20.70.14, data IP 2: null]

Apr 14, 2017 7:22:10 PM Host: mcdcesxi10.masscloudspace.local - FAILED (Error code: COMMUNICATION_ERROR) : Add Slave MDM [Mgmt IP: 10.20.30.15, data IP 1: 10.20.70.14, data IP 2: null]

Apr 14, 2017 7:36:09 PM Host: mcdcesxi10.masscloudspace.local - STARTED: Add Slave MDM [Mgmt IP: 10.20.30.15, data IP 1: 10.20.70.14, data IP 2: null]

Apr 14, 2017 7:36:13 PM Host: mcdcesxi10.masscloudspace.local - FAILED (Error code: COMMUNICATION_ERROR) : Add Slave MDM [Mgmt IP: 10.20.30.15, data IP 1: 10.20.70.14, data IP 2: null]

Thanks in Advance

Suriya

54 Posts

April 17th, 2017 01:00

Can I have an update on the above request, please?

54 Posts

April 17th, 2017 05:00

Hi, the implementation stopped with the error below:

Completed: Deploy VM ScaleIO-10.20.30.15

Completed: Perform initial configuration for VM: 10.20.30.15

Completed: Install ScaleIO LIA module

Failed: Add SLAVE_MDM to MDM cluster (ScaleIO - Communication error between core components)

Waiting: Install ScaleIO SDS module

Waiting: Install ScaleIO RFCACHE module

Waiting: Configure SDC driver on ESX

Waiting: Create VMDK ScaleIO-VMDK-2074960421 on datastore esxi_10_local_ds_02

Waiting: Create VMDK ScaleIO-VMDK-2074960422 on datastore esxi_10_local_ds_01

Waiting: Add SDS device ScaleIO-7bad6225 to Storage Pool MSSI_SP

Waiting: Add SDS device ScaleIO-7bad6226 to Storage Pool MSSI_SP

Any suggestions, please?

Regards,

Suriya M

73 Posts

April 17th, 2017 11:00

Hi Suriya,

According to your own update, you are not able to ping between the primary and secondary MDM data-link IPs.

Can you ping the following:

10.20.30.14 <--> 10.20.30.15

10.20.70.13 <--> 10.20.70.14

To check if the mdm service is running, just type in "ps -ef |grep mdm". If it isn't running, type in "/opt/emc/scaleio/mdm/bin/create_service.sh", and check again.

You can also look at /opt/emc/scaleio/mdm/logs/trc.0 to see if there are any clues as to why you are unable to communicate. The problem here is definitely communication: either the IP/network is not set up properly, or the service is not up and running.
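
Run together, a minimal sketch of both checks on the Slave MDM SVM looks like this (paths as given above; the grep pattern is just a starting point):

# start the MDM service if ps shows nothing, then re-check
ps -ef | grep -v grep | grep mdm || /opt/emc/scaleio/mdm/bin/create_service.sh
# scan the trace log for recent errors
grep -iE 'error|fail|warn' /opt/emc/scaleio/mdm/logs/trc.0 | tail -n 20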

54 Posts

April 17th, 2017 12:00

Hi Rickh

Thanks for your update. I have checked on my end; I am adding the output below.

I am able to ping from 10.20.70.14 to 10.20.70.15, which is reachable, but I cannot ping from 10.20.70.13 to 10.20.70.14, which is not reachable. I have checked the mdm command and it is working fine on all the hosts.


I am adding the trc.0 output; kindly have a look at it and suggest how I can continue the installation.

17/04 08:48:46.470542 ---------- Process started. Version private ScaleIO R2_0.12000.122_Release Dec 23 2016. PID 3738 ----------

17/04 08:48:46.471296 (nil):mosSecurityLayer_Load:00327: Loading synchronously using default security configuration

17/04 08:48:46.471302 (nil):mosSecurity_Init:00132: Security is disabled

17/04 08:48:46.471351 (nil):mosT10Dif_Init:00705: (T10DIF) T10DIF layer using CPU (PCLMUL) acceleration !!!

17/04 08:48:46.471389 (nil):mosConf_Trace:00450: Conf: rep_ignore_lock_error = 0 (0)

17/04 08:48:46.471393 (nil):mosConf_Trace:00467: Conf: rep_dev_name = /opt/emc/scaleio/mdm/rep/mdm_rep.bin ()

17/04 08:48:46.471395 (nil):mosConf_Trace:00467: Conf: rep_shm_name = scaleio_mdm_rep_shm (emc_mdm_repository_shm)

17/04 08:48:46.471396 (nil):mosConf_Trace:00434: Conf: rep_umt_num = 100 (100)

17/04 08:48:46.471397 (nil):mosConf_Trace:00434: Conf: rep_crash_after_num_writes = 0 (0)

17/04 08:48:46.471399 (nil):mosConf_Trace:00434: Conf: mdm_debug_assert = 0 (0)

17/04 08:48:46.471400 (nil):mosConf_Trace:00450: Conf: mdm_multihead_debug_crash_active = 0 (0)

17/04 08:48:46.471401 (nil):mosConf_Trace:00434: Conf: mdm_multihead_debug_crash_count = 20000 (20000)

17/04 08:48:46.471403 (nil):mosConf_Trace:00450: Conf: mdm_multihead_allow_rebuilds = 1 (1)

17/04 08:48:46.471404 (nil):mosConf_Trace:00450: Conf: mdm_multihead_defensive_asserts = 0 (0)

17/04 08:48:46.471405 (nil):mosConf_Trace:00450: Conf: mdm_multihead_counter_validation = 0 (0)

17/04 08:48:46.471406 (nil):mosConf_Trace:00450: Conf: mdm_multihead_move_to_una_on_primary_degraded = 0 (0)

17/04 08:48:46.471407 (nil):mosConf_Trace:00450: Conf: mdm_una_delay_on_first_connect = 1 (1)

17/04 08:48:46.471408 (nil):mosConf_Trace:00450: Conf: mdm_optimize_protection_time = 1 (1)

17/04 08:48:46.471409 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_capacity_threshold_lbs = 20480 (20480)

17/04 08:48:46.471411 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_ios_threshold = 10 (10)

17/04 08:48:46.471412 (nil):mosConf_Trace:00434: Conf: mdm_limit_vol_alloc_blocks_in_head = 1024 (1024)

17/04 08:48:46.471413 (nil):mosConf_Trace:00450: Conf: mdm_optimize_rebuild_division = 1 (1)

17/04 08:48:46.471414 (nil):mosConf_Trace:00450: Conf: mdm_tgt_debug_crash_active = 0 (0)

17/04 08:48:46.471415 (nil):mosConf_Trace:00434: Conf: mdm_tgt_debug_crash_count = 50 (50)

17/04 08:48:46.471416 (nil):mosConf_Trace:00450: Conf: mdm_tgt_defensive_asserts = 0 (0)

17/04 08:48:46.471418 (nil):mosConf_Trace:00450: Conf: mdm_tgt_defensive_bwc = 0 (0)

17/04 08:48:46.471419 (nil):mosConf_Trace:00434: Conf: mdm_tgt_default_test_total_time_seconds = 20 (20)

17/04 08:48:46.471420 (nil):mosConf_Trace:00450: Conf: mdm_tgt_default_test_include_write = 0 (0)

17/04 08:48:46.471421 (nil):mosConf_Trace:00434: Conf: mdm_tgt_default_test_io_size_kb = 8 (8)

17/04 08:48:46.471422 (nil):mosConf_Trace:00434: Conf: mdm_tgt_default_test_total_io_size_MB = 100 (100)

17/04 08:48:46.471423 (nil):mosConf_Trace:00434: Conf: mdm_dev_cap_threshold_pct__high = 80 (80)

17/04 08:48:46.471425 (nil):mosConf_Trace:00434: Conf: mdm_dev_cap_threshold_pct__critical = 90 (90)

17/04 08:48:46.471426 (nil):mosConf_Trace:00434: Conf: mdm_dev_min_capacity = 90 (90)

17/04 08:48:46.471427 (nil):mosConf_Trace:00434: Conf: mdm_tgt_write_sim_multiply = 1 (1)

17/04 08:48:46.471428 (nil):mosConf_Trace:00434: Conf: mdm_tgt_read_sim_multiply = 1 (1)

17/04 08:48:46.471429 (nil):mosConf_Trace:00434: Conf: mdm_default_spare_pct = 10 (10)

17/04 08:48:46.471430 (nil):mosConf_Trace:00434: Conf: mdm_num_cli_query_umts = 5 (5)

17/04 08:48:46.471432 (nil):mosConf_Trace:00434: Conf: mdm_num_cli_quick_command_umts = 1 (1)

17/04 08:48:46.471433 (nil):mosConf_Trace:00434: Conf: mdm_num_cli_tgt_command_umts = 10 (10)

17/04 08:48:46.471435 (nil):mosConf_Trace:00434: Conf: mdm_num_cli_volume_command_umts = 1 (1)

17/04 08:48:46.471436 (nil):mosConf_Trace:00434: Conf: usr_mng_num_command_umt = 1 (1)

17/04 08:48:46.471437 (nil):mosConf_Trace:00434: Conf: mdm_num_cli_miscellaneous_umts = 1 (1)

17/04 08:48:46.471438 (nil):mosConf_Trace:00417: Skipping printout of user_password_min_length

17/04 08:48:46.471439 (nil):mosConf_Trace:00434: Conf: user_bad_global_login_threshold = 3 (3)

17/04 08:48:46.471441 (nil):mosConf_Trace:00434: Conf: user_session_timeout_secs = 600 (600)

17/04 08:48:46.471442 (nil):mosConf_Trace:00434: Conf: user_session_hard_timeout_secs = 28800 (28800)

17/04 08:48:46.471443 (nil):mosConf_Trace:00450: Conf: user_session_timeout_reset_on_access = 1 (1)

17/04 08:48:46.471444 (nil):mosConf_Trace:00434: Conf: mdm_num_cli_query_buffer_size = 256 (256)

17/04 08:48:46.471445 (nil):mosConf_Trace:00434: Conf: mdm_num_buckets_in_tgt_train = 64 (64)

17/04 08:48:46.471446 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_concurrent_io_limit = 1 (1)

17/04 08:48:46.471448 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_max_bw_kb = 10240 (10240)

17/04 08:48:46.471449 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_quiet_period_ms = 2000 (2000)

17/04 08:48:46.471450 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_app_bw_threshold_kb = 10240 (10240)

17/04 08:48:46.471451 (nil):mosConf_Trace:00434: Conf: mdm_rebuild_app_iops_threshold = 10 (10)

17/04 08:48:46.471452 (nil):mosConf_Trace:00434: Conf: mdm_rebalance_concurrent_io_limit = 1 (1)

17/04 08:48:46.471453 (nil):mosConf_Trace:00434: Conf: mdm_rebalance_max_bw_kb = 10240 (10240)

17/04 08:48:46.471455 (nil):mosConf_Trace:00434: Conf: mdm_rebalance_quiet_period_ms = 2000 (2000)

17/04 08:48:46.471456 (nil):mosConf_Trace:00434: Conf: mdm_rebalance_app_bw_threshold_kb = 10240 (10240)

17/04 08:48:46.471457 (nil):mosConf_Trace:00434: Conf: mdm_rebalance_app_iops_threshold = 10 (10)

17/04 08:48:46.471458 (nil):mosConf_Trace:00434: Conf: mdm_num_vtrees_per_deletion_operation = 500 (500)

17/04 08:48:46.471459 (nil):mosConf_Trace:00450: Conf: mdm_throt_random_queue = 1 (1)

17/04 08:48:46.471460 (nil):mosConf_Trace:00450: Conf: mdm_zero_pad_enabled = 0 (0)

17/04 08:48:46.471462 (nil):mosConf_Trace:00434: Conf: mdm_thin_capacity_ratio = 10 (10)

17/04 08:48:46.471463 (nil):mosConf_Trace:00434: Conf: mdm_remote_syslog_buffer_size = 1024 (1024)

17/04 08:48:46.471464 (nil):mosConf_Trace:00434: Conf: rsyslog_time_between_reconnects_in_millis = 5000 (5000)

17/04 08:48:46.471465 (nil):mosConf_Trace:00434: Conf: rsyslog_max_num_of_instances = 4 (4)

17/04 08:48:46.471466 (nil):mosConf_Trace:00434: Conf: mdm__sched_thread_stuck_thresh_milli = 500 (500)

17/04 08:48:46.471467 (nil):mosConf_Trace:00434: Conf: mdm__sched_guard_panic_ratio = 3 (3)

17/04 08:48:46.471469 (nil):mosConf_Trace:00434: Conf: mdm_umt_run_thresh_ms = 1000 (1000)

17/04 08:48:46.471470 (nil):mosConf_Trace:00450: Conf: mdm_umt_run_should_panic_on_thresh = 0 (0)

17/04 08:48:46.471471 (nil):mosConf_Trace:00434: Conf: sds_throtlling_buffers = 128 (128)

17/04 08:48:46.471472 (nil):mosConf_Trace:00450: Conf: mdm_use_fake_swid = 0 (0)

17/04 08:48:46.471473 (nil):mosConf_Trace:00450: Conf: mdm_decode_license_internally = 0 (0)

17/04 08:48:46.471474 (nil):mosConf_Trace:00450: Conf: mdm_always_save_license_key = 0 (0)

17/04 08:48:46.471475 (nil):mosConf_Trace:00467: Conf: dbg_break_on_entry =  ()

17/04 08:48:46.471477 (nil):mosConf_Trace:00467: Conf: dbg_break_on_exit =  ()

17/04 08:48:46.471478 (nil):mosConf_Trace:00467: Conf: authentication_certificate_file_name = mdm_authentication_certificate.pem (mdm_authentication_certificate.pem)

17/04 08:48:46.471479 (nil):mosConf_Trace:00434: Conf: mdm_con_matrix_sleep_between_analyses = 1000 (1000)

17/04 08:48:46.471480 (nil):mosConf_Trace:00434: Conf: mdm_con_matrix_default_num_tgts = 32 (32)

17/04 08:48:46.471481 (nil):mosConf_Trace:00434: Conf: mdm_con_matrix_num_added_on_realloc = 32 (32)

17/04 08:48:46.471483 (nil):mosConf_Trace:00434: Conf: mdm_con_matrix_policy_cooloff = 5000 (5000)

17/04 08:48:46.471485 (nil):mosConf_Trace:00450: Conf: mdm_con_matrix_grant_by_default = 0 (0)

17/04 08:48:46.471486 (nil):mosConf_Trace:00450: Conf: mdm_allow_adding_old_tgt = 0 (0)

17/04 08:48:46.471487 (nil):mosConf_Trace:00434: Conf: mdm_num_failed_tgt_up_processes_before_sleeping = 5 (5)

17/04 08:48:46.471488 (nil):mosConf_Trace:00434: Conf: mdm_sleep_time_after_multiple_reconfig_failure_sec = 15 (15)

17/04 08:48:46.471489 (nil):mosConf_Trace:00434: Conf: mdm_max_num_objects_in_qpoll_response = 1000 (1000)

17/04 08:48:46.471491 (nil):mosConf_Trace:00450: Conf: mdm_allow_obfuscation = 0 (0)

17/04 08:48:46.471492 (nil):mosConf_Trace:00450: Conf: mdm_enable_dev_metadata_polling = 1 (1)

17/04 08:48:46.471493 (nil):mosConf_Trace:00434: Conf: mdm_delay_handling_client_request_milliseconds = 0 (0)

17/04 08:48:46.471494 (nil):mosConf_Trace:00434: Conf: mdm_umt_num = 1650 (1650)

17/04 08:48:46.471495 (nil):mosConf_Trace:00434: Conf: mdm_umt_stack_size_kb = 32 (32)

17/04 08:48:46.471496 (nil):mosConf_Trace:00434: Conf: mdm_umt_os_thrd = 4 (4)

17/04 08:48:46.471498 (nil):mosConf_Trace:00434: Conf: mdm_umt_quota_per_os_thrd = 10 (10)

17/04 08:48:46.471499 (nil):mosConf_Trace:00450: Conf: mdm_umt_use_guard_page = 1 (1)

17/04 08:48:46.471500 (nil):mosConf_Trace:00450: Conf: mdm_net__important_umt = 1 (1)

17/04 08:48:46.471501 (nil):mosConf_Trace:00450: Conf: mdm_net__poll_timer = 0 (0)

17/04 08:48:46.471502 (nil):mosConf_Trace:00450: Conf: mdm_net__use_keepalive = 0 (0)

17/04 08:48:46.471503 (nil):mosConf_Trace:00434: Conf: mdm_net__worker_thread = 4 (4)

17/04 08:48:46.471504 (nil):mosConf_Trace:00434: Conf: mdm_net__max_priority_sends = 16 (16)

17/04 08:48:46.471506 (nil):mosConf_Trace:00434: Conf: mdm_net___close_quiet_socket_timeout = 0 (0)

17/04 08:48:46.471507 (nil):mosConf_Trace:00450: Conf: mdm_net__crash_on_send_timeout = 1 (1)

17/04 08:48:46.471508 (nil):mosConf_Trace:00450: Conf: mdm_net__use_loopback_fastpath = 1 (1)

17/04 08:48:46.471509 (nil):mosConf_Trace:00450: Conf: mdm_net__use_separate_sched = 0 (0)

17/04 08:48:46.471510 (nil):mosConf_Trace:00450: Conf: actor_role_is_manager = 1 (0)

17/04 08:48:46.471511 (nil):mosConf_Trace:00467: Conf: actor_rep_dev_name = /opt/emc/scaleio/mdm/rep/actor_rep.bin (actor_rep)

17/04 08:48:46.471513 (nil):mosConf_Trace:00467: Conf: actor_rep_shm_name = scaleio_actor_shm (emc_actor_shm)

17/04 08:48:46.471514 (nil):mosConf_Trace:00467: Conf: actor_local_voter_rep_dev_name = /opt/emc/scaleio/mdm/rep/actor_local_voter_rep.bin (voter_rep)

17/04 08:48:46.471515 (nil):mosConf_Trace:00467: Conf: actor_local_voter_rep_shm_name = scaleio_voter_shm (emc_actor_local_voter_shm)

17/04 08:48:46.471516 (nil):mosConf_Trace:00434: Conf: actor_max_asyncio_reqs = 200 (200)

17/04 08:48:46.471518 (nil):mosConf_Trace:00434: Conf: actor_asyncio_thrds = 1 (1)

17/04 08:48:46.471519 (nil):mosConf_Trace:00450: Conf: actor_asyncio_is_native = 0 (0)

17/04 08:48:46.471520 (nil):mosConf_Trace:00434: Conf: actor_cluster_port = 9011 (9011)

17/04 08:48:46.471521 (nil):mosConf_Trace:00434: Conf: actor_net__worker_thread = 1 (1)

17/04 08:48:46.471523 (nil):mosConf_Trace:00434: Conf: actor_net__max_priority_sends = 16 (16)

17/04 08:48:46.471524 (nil):mosConf_Trace:00450: Conf: actor_net__crash_on_send_timeout = 1 (1)

17/04 08:48:46.471525 (nil):mosConf_Trace:00450: Conf: actor_net__use_loopback_fastpath = 1 (1)

17/04 08:48:46.471526 (nil):mosConf_Trace:00434: Conf: actor_safe_lease_ms = 400 (400)

17/04 08:48:46.471527 (nil):mosConf_Trace:00434: Conf: actor_cede_time_ms = 500 (500)

17/04 08:48:46.471529 (nil):mosConf_Trace:00434: Conf: actor_min_time_between_sends_ms = 50 (50)

17/04 08:48:46.471530 (nil):mosConf_Trace:00434: Conf: actor_inactive_sock_time_milli = 300 (300)

17/04 08:48:46.471531 (nil):mosConf_Trace:00434: Conf: cluster_virtual_ip_publish_interval_sec = 30 (30)

17/04 08:48:46.471532 (nil):mosConf_Trace:00467: Conf: dbg_break_on_entry =  ()

17/04 08:48:46.471533 (nil):mosConf_Trace:00467: Conf: dbg_break_on_exit =  ()

17/04 08:48:46.471534 (nil):mosConf_Trace:00467: Conf: actor_certificate_file_name = mdm_management_certificate.pem (mdm_management_certificate.pem)

17/04 08:48:46.471537 (nil):mosConf_Trace:00467: Conf: actor_temp_certificate_file_name = mdm_temp_certificate.pem (mdm_temp_certificate.pem)

17/04 08:48:46.471538 (nil):mosConf_Trace:00417: Skipping printout of actor_private_key_password

17/04 08:48:46.471539 (nil):mosConf_Trace:00467: Conf: actor_new_private_key_file_name = mdm.key (mdm.key)

17/04 08:48:46.471540 (nil):mosConf_Trace:00450: Conf: actor_default_client_security = 1 (1)

17/04 08:48:46.471542 (nil):mosConf_Trace:00450: Conf: enable_slave_write_optimization = 1 (1)

17/04 08:48:46.471543 (nil):mosConf_Trace:00434: Conf: mdm__sched_thread_stuck_thresh_milli = 500 (500)

17/04 08:48:46.471544 (nil):mosConf_Trace:00434: Conf: mdm_sched_thread_stuck_allowed_milli = 4000 (4000)

17/04 08:48:46.471545 (nil):mosConf_Trace:00434: Conf: mdm_long_io_allowed_milli = 30000 (30000)

17/04 08:48:46.471547 (nil):mosConf_Trace:00434: Conf: mdm_long_io_warning_milli = 15000 (15000)

17/04 08:48:46.471548 (nil):mosConf_Trace:00450: Conf: mdm_long_operation_switchover = 1 (1)

17/04 08:48:46.471549 (nil):mosConf_Trace:00434: Conf: actor_sync_traces_time_millis = 7000 (7000)

17/04 08:48:46.471550 (nil):mosConf_Trace:00434: Conf: actor_membership_debug_crash_point = 0 (0)

17/04 08:48:46.471551 (nil):mosConf_Trace:00467: Conf: actor_local_voter_rep_dev_name = /opt/emc/scaleio/mdm/rep/actor_local_voter_rep.bin (voter_rep)

17/04 08:48:46.471553 (nil):mosConf_Trace:00467: Conf: actor_local_voter_rep_shm_name = scaleio_voter_shm (emc_voter_shm)

17/04 08:48:46.471554 (nil):mosConf_Trace:00434: Conf: voter_max_asyncio_reqs = 200 (200)

17/04 08:48:46.471555 (nil):mosConf_Trace:00434: Conf: voter_asyncio_thrds = 1 (1)

17/04 08:48:46.471557 (nil):mosConf_Trace:00450: Conf: voter_asyncio_is_native = 0 (0)

17/04 08:48:46.471558 (nil):mosConf_Trace:00434: Conf: actor_cluster_port = 9011 (9011)

17/04 08:48:46.471559 (nil):mosConf_Trace:00434: Conf: voter_net__worker_thread = 1 (1)

17/04 08:48:46.471560 (nil):mosConf_Trace:00434: Conf: voter_net__max_priority_sends = 16 (16)

17/04 08:48:46.471561 (nil):mosConf_Trace:00450: Conf: voter_net__crash_on_send_timeout = 1 (1)

17/04 08:48:46.471562 (nil):mosConf_Trace:00434: Conf: voter_grace_period_ms = 500 (500)

17/04 08:48:46.471564 (nil):mosConf_Trace:00434: Conf: voter_inactive_sock_time_100milli = 3 (3)

17/04 08:48:46.471565 (nil):mosConf_Trace:00434: Conf: cap_mgr_migrate_threshold_percent = 0 (0)

17/04 08:48:46.471566 (nil):mosConf_Trace:00434: Conf: cap_mgr_bulk_threshold_percent = 20 (20)

17/04 08:48:46.471567 (nil):mosConf_Trace:00434: Conf: cap_mgr_balance_mb_threshold = 8192 (8192)

17/04 08:48:46.471569 (nil):mosConf_Trace:00434: Conf: cap_mgr_balance_mb_threshold_comb_alloc = 8192 (8192)

17/04 08:48:46.471570 (nil):mosConf_Trace:00467: Conf: eventlog_filename = eventlog.db (eventlog.db)

17/04 08:48:46.471571 (nil):mosConf_Trace:00434: Conf: eventlog_queue_size = 1000 (1000)

17/04 08:48:46.471573 (nil):mosConf_Trace:00434: Conf: eventlog_writer_sleep_sec = 1 (1)

17/04 08:48:46.471574 (nil):mosConf_Trace:00434: Conf: eventlog_events_per_burst = 1000 (1000)

17/04 08:48:46.471575 (nil):mosConf_Trace:00450: Conf: eventlog_dump_events = 1 (1)

17/04 08:48:46.471576 (nil):mosConf_Trace:00434: Conf: eventlog_dump_events_wait_milli = 50 (50)

17/04 08:48:46.471577 (nil):mosConf_Trace:00450: Conf: eventlog_rotate_with_python = 1 (1)

17/04 08:48:46.471579 (nil):mosConf_Trace:00450: Conf: security_enable = 0 (0)

17/04 08:48:46.471580 (nil):mosConf_Trace:00450: Conf: security_asynchronous_load = 0 (0)

17/04 08:48:46.471581 (nil):mosConf_Trace:00450: Conf: security_allow_unsecure = 0 (0)

17/04 08:48:46.471582 (nil):mosConf_Trace:00450: Conf: ssl_enable = 0 (0)

17/04 08:48:46.471583 (nil):mosConf_Trace:00450: Conf: ssl_disable_client_auth = 0 (0)

17/04 08:48:46.471584 (nil):mosConf_Trace:00467: Conf: ssl_trust_cert_file =  ()

17/04 08:48:46.471586 (nil):mosConf_Trace:00467: Conf: ssl_key_pair_file =  ()

17/04 08:48:46.471587 (nil):mosConf_Trace:00417: Skipping printout of ssl_private_key_password

17/04 08:48:46.471589 (nil):mosConf_Trace:00467: Conf: ssl_cipher_list = ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!eNULL:!EXPORT:!MD5:!DSS:!DES:!RC4 (ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!eNULL:!EXPORT:!MD5:!DSS:!DES:!RC4)

17/04 08:48:46.471590 (nil):mosConf_Trace:00467: Conf: ssl_method = TLSv1.2 (TLSv1.2)

17/04 08:48:46.471591 (nil):mosConf_Trace:00434: Conf: ssl_context_id = 0 (0)

17/04 08:48:46.471593 (nil):mosConf_Trace:00434: Conf: ssl_cert_verify_depth = 4 (4)

17/04 08:48:46.471594 (nil):mosConf_Trace:00450: Conf: auth_enable = 0 (0)

17/04 08:48:46.471595 (nil):mosConf_Trace:00467: Conf: auth_key_pair_file =  ()

17/04 08:48:46.471596 (nil):mosConf_Trace:00417: Skipping printout of auth_private_key_password

17/04 08:48:46.471597 (nil):mosConf_Trace:00467: Conf: auth_trust_cert_file =  ()

17/04 08:48:46.471599 (nil):mosConf_Trace:00450: Conf: ldap_synchronous_load = 0 (0)

17/04 08:48:46.471600 (nil):mosConf_Trace:00434: Conf: ldap_timeout_sec = 5 (5)

17/04 08:48:46.471601 (nil):mosConf_Trace:00450: Conf: exp_file_enabled = 1 (1)

17/04 08:48:46.471602 (nil):mosConf_Trace:00467: Conf: exp_file_prefix = exp (exp)

17/04 08:48:46.471603 (nil):mosConf_Trace:00434: Conf: exp_file_history = 5 (5)

17/04 08:48:46.471605 (nil):mosConf_Trace:00434: Conf: exp_file_size = 1048576 (1048576)

17/04 08:48:46.471606 (nil):mosConf_Trace:00450: Conf: exp_dump_traces = 1 (1)

17/04 08:48:46.471607 (nil):mosConf_Trace:00434: Conf: exp_milli_wait_dump_traces = 50 (50)

17/04 08:48:46.471609 (nil):mosConf_Trace:00450: Conf: dbg_enable_core = 0 (0)

17/04 08:48:46.471610 (nil):mosConf_Trace:00450: Conf: dbg_enable_backtrace = 1 (1)

17/04 08:48:46.471611 (nil):mosConf_Trace:00434: Conf: dbg_core_type = 0 (0)

17/04 08:48:46.471612 (nil):mosConf_Trace:00450: Conf: dbg_planned_crash_panic = 0 (0)

17/04 08:48:46.471613 (nil):mosConf_Trace:00434: Conf: dbg_core_min_disk_space_mb = 500 (500)

17/04 08:48:46.471615 (nil):mosConf_Trace:00450: Conf: trc_file_enabled = 1 (1)

17/04 08:48:46.471616 (nil):mosConf_Trace:00450: Conf: err_file_enabled = 0 (0)

17/04 08:48:46.471617 (nil):mosConf_Trace:00467: Conf: err_file_prefix = err (err)

17/04 08:48:46.471618 (nil):mosConf_Trace:00450: Conf: trc_stdout = 0 (0)

17/04 08:48:46.471620 (nil):mosConf_Trace:00450: Conf: trc_file_async = 1 (1)

17/04 08:48:46.471621 (nil):mosConf_Trace:00434: Conf: trc_used_buf_percent_dump = 10 (10)

17/04 08:48:46.471622 (nil):mosConf_Trace:00434: Conf: trc_async_buf_size = 5242880 (5242880)

17/04 08:48:46.471624 (nil):mosConf_Trace:00434: Conf: trc_async_thread_sleep_millisecond = 1000 (1000)

17/04 08:48:46.471625 (nil):mosConf_Trace:00467: Conf: trc_file_prefix = trc (trc)

17/04 08:48:46.471626 (nil):mosConf_Trace:00467: Conf: trc_frequency_str_map =  ()

17/04 08:48:46.471627 (nil):mosConf_Trace:00450: Conf: trc_frequency_allow_modify = 1 (1)

17/04 08:48:46.471628 (nil):mosConf_Trace:00434: Conf: trc_file_history = 5 (5)

17/04 08:48:46.471630 (nil):mosConf_Trace:00434: Conf: trc_file_size = 52428800 (52428800)

17/04 08:48:46.471631 (nil):mosConf_Trace:00434: Conf: bwc_win_size_sec = 5 (5)

17/04 08:48:46.471632 (nil):mosConf_Trace:00450: Conf: bwc_enable = 1 (1)

17/04 08:48:46.471633 (nil):mosConf_Trace:00450: Conf: timer_use_threads = 0 (0)

17/04 08:48:46.471635 (nil):mosConf_Trace:00434: Conf: sdbg_port = 25620 (25620)

17/04 08:48:46.471636 (nil):mosConf_Trace:00434: Conf: instrumented_lock_period_log2 = 10 (10)

17/04 08:48:46.471637 (nil):mosConf_Trace:00434: Conf: instrumented_lock_max_contention = 100 (100)

17/04 08:48:46.471639 (nil):mosConf_Trace:00434: Conf: instrumented_lock_max_latency_cycles = 500000 (500000)

17/04 08:48:46.471640 (nil):mosConf_Trace:00434: Conf: numa_affinity = 0 (0)

17/04 08:48:46.471641 (nil):mosConf_Trace:00450: Conf: numa_memory_affinity = 1 (1)

17/04 08:48:46.471642 (nil):mosConf_Trace:00450: Conf: lock_use_spinlock = 0 (0)

17/04 08:48:46.471644 (nil):mosConf_Trace:00450: Conf: shm_server_on = 0 (0)

17/04 08:48:46.471645 (nil):mosConf_Trace:00434: Conf: shm_server_port = 0 (0)

17/04 08:48:46.471647 (nil):mosConf_Trace:00450: Conf: dbg_io_timeout_enable = 0 (0)

17/04 08:48:46.471648 (nil):mosConf_Trace:00434: Conf: dbg_io_timeout_millis = 15000 (15000)

17/04 08:48:46.471649 (nil):mosConf_Trace:00434: Conf: dbg_io_good_ios = 5000 (5000)

17/04 08:48:46.471650 (nil):mosConf_Trace:00450: Conf: config_oom_killer = 0 (0)

17/04 08:48:46.471651 (nil):mosConf_Trace:00450: Conf: use_fifo_file = 0 (0)

17/04 08:48:46.471653 (nil):mosConf_Trace:00467: Conf: fifo_filename = ./.mos.fifo (./.mos.fifo)

17/04 08:48:46.471654 (nil):mosConf_Trace:00450: Conf: print_conf = 1 (1)

17/04 08:48:46.471655 (nil):mosConf_Trace:00450: Conf: print_detailed_conf = 1 (1)

17/04 08:48:46.471656 (nil):mosConf_Trace:00450: Conf: umt_use_real_thread_ticks = 0 (0)

17/04 08:48:46.471657 (nil):mosConf_Trace:00434: Conf: max_callback_time_usec = 1000 (1000)

17/04 08:48:46.471658 (nil):mosConf_Trace:00434: Conf: poll_loop_time_factor = 0 (0)

17/04 08:48:46.471659 (nil):mosConf_Trace:00434: Conf: poll_loop_max_overshoot_factor = 0 (0)

17/04 08:48:46.471661 (nil):mosConf_Trace:00450: Conf: intr_violation_crash = 0 (0)

17/04 08:48:46.471662 (nil):mosConf_Trace:00450: Conf: netsmalliov_use_mem_pool = 1 (1)

17/04 08:48:46.471663 (nil):mosConf_Trace:00434: Conf: netsmalliov_mem_pool_size = 30000 (30000)

17/04 08:48:46.471664 (nil):mosConf_Trace:00434: Conf: debug_double_exp_wait_sec = 1 (1)

17/04 08:48:46.471665 (nil):mosConf_Trace:00467: Conf: cfg_dir_path = /opt/emc/scaleio/mdm/bin/../logs/../cfg/ ()

17/04 08:48:46.471666 (nil):mosConf_Trace:00467: Conf: bin_dir_path = /opt/emc/scaleio/mdm/bin/../logs/../bin/ ()

17/04 08:48:46.471668 (nil):mosConf_Trace:00467: Conf: rep_dir_path = /opt/emc/scaleio/mdm/bin/../logs/../rep/ ()

17/04 08:48:46.471669 (nil):mosConf_Trace:00467: Conf: diag_dir_path = /opt/emc/scaleio/mdm/bin/../logs/../diag/ ()

17/04 08:48:46.471670 (nil):mosConf_Trace:00450: Conf: hw_aware_enable_lsi_gen_info = 0 (0)

17/04 08:48:46.471671 (nil):mosConf_Trace:00450: Conf: hw_aware_enable_hp_gen_info = 1 (1)

17/04 08:48:46.471672 (nil):mosConf_Trace:00450: Conf: hw_aware_enable_dell_gen_info = 0 (0)

17/04 08:48:46.471673 (nil):mosConf_Trace:00450: Conf: hw_aware_enable_lsi_vdisk_info = 1 (1)

17/04 08:48:46.471674 (nil):mosConf_Trace:00450: Conf: hw_aware_enable_hp_vdisk_info = 1 (1)

17/04 08:48:46.471676 (nil):mosConf_Trace:00450: Conf: hw_aware_enable_dell_vdisk_info = 1 (1)

17/04 08:48:46.471681 (nil):mosUmtSchedThrd_SetDefaultSchedGuardValues:01231: Default Idle thread threshold set to 0 millis, panic ratio set to 3

17/04 08:48:46.471720 (nil):schedThrdGuard_SampleLivnes:01396: Sched guard thread is now live

17/04 08:48:46.471742 (nil):mosUmtSchedThrd_SetDefaultSchedGuardValues:01231: Default Idle thread threshold set to 0 millis, panic ratio set to 3

17/04 08:48:46.471856 (nil):mosUmtSchedThrd_SetLivnessThreshold:01260: Idle thread threshold for sched 0xbf9840 is  set to 0 ticks 0 millis

17/04 08:48:46.471872 (nil):mosMemPool_CreateIntrn:00609: Allocate 58 MB, for memory pool MDM-UMT-Allocator

17/04 08:48:46.471925 (nil):mosUmtSchedThrd_UpdateOngoingOps:00607: Now guarding 4 threads.

17/04 08:48:46.505419 0x7f0c54605eb8:mosMemPool_CreateIntrn:00609: Allocate 1 MB, for memory pool No name

17/04 08:48:46.506196 0x7f0c54605eb8:mosMemPool_CreateIntrn:00609: Allocate 1 MB, for memory pool No name

17/04 08:48:46.506303 (nil):mosEventLog_MainRotateThrd:00170: Event Log Rotation: Started using '"/opt/emc/scaleio/mdm/bin/../logs/../bin//eventLogger.py" --conf_dir=/opt/emc/scaleio/mdm/bin/../logs/../cfg/ --event_db_file_name=eventlog.db' command

17/04 08:48:46.506587 0x7f0c54605eb8:mosMemPool_CreateIntrn:00609: Allocate 1 MB, for memory pool No name

17/04 08:48:46.507325 0x7f0c54605eb8:mosMemPool_CreateIntrn:00609: Allocate 1 MB, for memory pool No name

17/04 08:48:46.508346 0x7f0c54605eb8:mosRSyslog_Init:01418: The module is now initialized with 1024 sized queue, 5000 millis reconnect time-out and 4 writers

17/04 08:48:46.508373 0x7f0c54605eb8:mosEventLog_PostInternal:00590: New event added. Message: "Initialized the remote syslog module". Additional info: "" Severity: Info

17/04 08:48:46.508389 0x7f0c54605eb8:mosEventLog_PostInternal:00590: New event added. Message: "MDM started with the role of Manager". Additional info: "" Severity: Info

17/04 08:48:46.508394 0x7f0c54605eb8:mdm_init:00097: Phase 0: actor create

17/04 08:48:46.508446 0x7f0c54605eb8:mosUmtSchedThrd_SetLivnessThreshold:01260: Idle thread threshold for sched 0xbceb40 is  set to 0 ticks 0 millis

17/04 08:48:46.508478 (nil):mosUmtSchedThrd_UpdateOngoingOps:00607: Now guarding 1 threads.

17/04 08:48:46.508524 0x7f0c54605eb8:actor_CtrlLock:14971: (actor_Create) Locking

17/04 08:48:46.508586 0x7f0c54605eb8:mosIO_open:00203: WARNING: Failed to get mount status for /opt/emc/scaleio/mdm/rep/actor_rep.bin (NOT_FOUND)

17/04 08:48:46.508592 0x7f0c54605eb8:mosAsyncIO_OpenFileEx:00309: WARNING: Failed to open IO file /opt/emc/scaleio/mdm/rep/actor_rep.bin with rc 3

17/04 08:48:46.508595 0x7f0c54605eb8:actor_Reconstruct:03475: Unable to open file (Size=0), rc: NOT_FOUND. Regenerate

17/04 08:48:46.508598 0x7f0c54605eb8:actor_Reconstruct:03569: Regenerate shared memory file. Starting with a clean configuration.

17/04 08:48:46.508655 0x7f0c54605eb8:mosIO_close:00115: Closing FD(10)

17/04 08:48:46.510006 0x7f0c54605eb8:actor_Create:02652: After reconstruct bDegraded [0,0,0,0] bDirty 0 bGiveOwnership 0  degraded gen 0 clsUniqueID 0000000000000000 actor gen 0 netObjID 0000000000000000 actorID 0000000000000000 successorID 0000000000000000 bEnableClientSecureCommunication 1

17/04 08:48:46.510022 0x7f0c54605eb8:net_SetNetIdentifier:00334: Network identification set with time stamp 35E4644 and magic number 21C4C00D

17/04 08:48:46.510038 0x7f0c54605eb8:mosMemPool_CreateIntrn:00609: Allocate 3 MB, for memory pool Network - Receive messages

17/04 08:48:46.511363 0x7f0c54605eb8:netPathMgr_SetKaSndIdleThreshInMIlli:00328: Updated to 6000 Ms

17/04 08:48:46.511369 0x7f0c54605eb8:netPathMgr_SetKaRcvParams:00387: Will send every 100 Ms, idle after 300 Ms

17/04 08:48:46.511371 0x7f0c54605eb8:netPathMgr_Init:00209: Path MGR 0x7f0c58004750 starting. KA-RCV-idle 300 Ms. KA-SEND-idle 6000 Ms KA-SEND-freq 100 Ms

17/04 08:48:46.511782 0x7f0c54605eb8:mosSecurity_PrintConf:00084: The security configuration is:

State: Enabled

Accept Unsecured Connections: Enabled

Asynchronous Load: Disabled

17/04 08:48:46.514195 0x7f0c54605eb8:mosSsl_ApplyConf:01046: SSL is disabled

17/04 08:48:46.514201 0x7f0c54605eb8:mosSsl_Init:01133: SSL initialized successfully

17/04 08:48:46.514257 0x7f0c54605eb8:mosAuth_ApplyConf:00430: Auth is disabled

17/04 08:48:46.514262 0x7f0c54605eb8:mosAuth_Init:00552: Authentication initialized successfully

17/04 08:48:46.514265 0x7f0c54605eb8:mosSecurity_Init:00167: MOS Security library initialized successfully

17/04 08:48:46.514277 0x7f0c54605eb8:actor_EnableSecurity:18656: Generating a new certificate file

17/04 08:48:46.822218 0x7f0c54605eb8:mosSsl_PrintConf:00336: The SSL configuration is:

State: Enabled

Context ID: 0

Trusted Cert File:

Key Pair File: /opt/emc/scaleio/mdm/bin/../logs/../cfg//mdm_management_certificate.pem

Method: TLSv1.2 (2)

Client Authentication: Disabled

Ciphers: ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!eNULL:!EXPORT:!MD5:!DSS:!DES:!RC4

Cert verification depth: 4

17/04 08:48:46.825435 0x7f0c54605eb8:replFile_Create:00166:

17/04 08:48:46.825501 0x7f0c54605eb8:replFile_CreateFile:00106: mosAsyncIO_OpenFile succeeded on 1 try. FileName='/opt/emc/scaleio/mdm/rep/mdm_rep.bin'.

17/04 08:48:46.825506 0x7f0c54605eb8:replFile_Recover:00205:

17/04 08:48:46.825520 0x7f0c54605eb8:mosShmUmt_Create:00083: shm_open failed, name scaleio_mdm_rep_shm, errno 2, size 0

17/04 08:48:46.825523 0x7f0c54605eb8:rep_Recover:01564: Could not map to shared memory scaleio_mdm_rep_shm, recovering from disk

17/04 08:48:51.411870 0x7f0c54605eb8:rep_RecoverShm:01378: finished with rc: SUCCESS

17/04 08:48:51.411974 0x7f0c54605eb8:replFile_Recover:00251: Done rc: SUCCESS

17/04 08:48:51.411991 0x7f0c54605eb8:syncerSlave_CreateLocal:00739: SyncerSlave create

17/04 08:48:51.412123 0x7f0c54605eb8:syncer_CreateLocal:01882: Syncer create

17/04 08:48:51.412175 0x7f0c54605eb8:syncer_CreateLocal:01882: Syncer create

17/04 08:48:51.412595 0x7f0c54605eb8:syncer_CreateLocal:01882: Syncer create

17/04 08:48:51.412631 0x7f0c54605eb8:syncer_CreateLocal:01882: Syncer create

17/04 08:48:51.412662 0x7f0c54605eb8:replFile_RecoverTypes:00279:

17/04 08:48:51.414835 0x7f0c54605eb8:replFile_RecoverTypes:00286: Shared mem recover error. RC: CLUSTER_DEGRADED

17/04 08:48:51.414890 0x7f0c54605eb8:mosIO_open:00203: WARNING: Failed to get mount status for /opt/emc/scaleio/mdm/rep/actor_local_voter_rep.bin (NOT_FOUND)

17/04 08:48:51.414893 0x7f0c54605eb8:mosAsyncIO_OpenFileEx:00309: WARNING: Failed to open IO file /opt/emc/scaleio/mdm/rep/actor_local_voter_rep.bin with rc 3

17/04 08:48:51.414895 0x7f0c54605eb8:voter_Reconstruct:02810: Unable to open file. regenerate

17/04 08:48:51.414946 0x7f0c54605eb8:mosIO_close:00115: Closing FD(16)

17/04 08:48:51.522414 0x7f0c54605eb8:voter_Reconstruct:02951: Regenerate.  State: NOT_CONFIGURED ActorID: 0 ActorGen: 0 DegradedGen: 0 oosActors#: 0

17/04 08:48:51.522509 (nil):mosUmtSchedThrd_UpdateOngoingOps:00607: Now guarding 1 threads.

17/04 08:48:51.522515 0x7f0c54605eb8:mosUmtSchedThrd_SetLivnessThreshold:01260: Idle thread threshold for sched 0x7f0c580043d0 is  set to 0 ticks 0 millis

17/04 08:48:51.522518 0x7f0c54605eb8:mosUmtSchedThrd_SetLivnessThreshold:01260: Idle thread threshold for sched 0x7f0c580043d0 is  set to 0 ticks 0 millis

17/04 08:48:51.522649 0x7f0c54605eb8:actor_Create:02867: #voters updated to 0, saved value 0

17/04 08:48:51.522655 0x7f0c54605eb8:actorVoter_CreateLocal:08763: #voters 0

17/04 08:48:51.522658 0x7f0c54605eb8:actorVoter_CreateLocal:08771: #voters updated to 1

17/04 08:48:51.522666 0x7f0c54605eb8:actorVoter_CreateLocal:08817: ID: 60be69f17e55cae0 No resources

17/04 08:48:51.522668 0x7f0c54605eb8:actorVoter_CreateLocal:08822: #voters 1

17/04 08:48:51.522670 0x7f0c54605eb8:mosUmtSchedThrd_SetLivnessThreshold:01260: Idle thread threshold for sched 0xbceb40 is  set to 20 ticks 200 millis

17/04 08:48:51.619816 0x7f0c54605eb8:actor_Create:02905: actorID: 0 clusterState: Invalid actorState: INVALID actorGen: 0 switchOwnership: 0 successorID:0000000000000000 otherActorIDs: [0000000000000000,0000000000000000]

17/04 08:48:51.619826 0x7f0c54605eb8:actor_DumpToTraces:13606: (actor_Create:2920) actorID: 0 netObjID: 0 clsUniqueID: 0 actorGen: 0 ClusterMode: Invalid ActorState: INVALID bCede: 0 bActorHasQuorum: 0 bHandlingTrigger: 0 bRemoteActorOwned: [0,0] bLocalVoterOwnedByOtherActor: 0 numRemoteActors: 0  numVoters: 1  Virtual IPs: []

17/04 08:48:51.619831 0x7f0c54605eb8:actor_DumpToTraces:13622: Membership change state: STABLE newClusterMode: Invalid newNodeSet: [0000000000000000] bGiveOwnership: 0 successorActorID: 0 tbVoterID: 0000000000000000 remoteVoterID: 0000000000000000

17/04 08:48:51.619833 0x7f0c54605eb8:actorVoter_DumpToTraces:13667:  voterID: 60be69f17e55cae0 netObjID: 0 realIndex: 0 realRemoteActorIndex: 2147483647 type: LOCAL hasLease: 0 lastRspVoteState: UNKNOWN lastRspActorID: 0 lastRspRC: ILLEGAL

17/04 08:48:51.619837 0x7f0c54605eb8:actor_DumpToTraces:13606: (actor_Create:3067) actorID: 0 netObjID: 0 clsUniqueID: 0 actorGen: 0 ClusterMode: Invalid ActorState: INVALID bCede: 0 bActorHasQuorum: 0 bHandlingTrigger: 0 bRemoteActorOwned: [0,0] bLocalVoterOwnedByOtherActor: 0 numRemoteActors: 0  numVoters: 1  Virtual IPs: []

17/04 08:48:51.619840 0x7f0c54605eb8:actor_DumpToTraces:13622: Membership change state: STABLE newClusterMode: Invalid newNodeSet: [0000000000000000] bGiveOwnership: 0 successorActorID: 0 tbVoterID: 0000000000000000 remoteVoterID: 0000000000000000

17/04 08:48:51.619842 0x7f0c54605eb8:actorVoter_DumpToTraces:13667:  voterID: 60be69f17e55cae0 netObjID: 0 realIndex: 0 realRemoteActorIndex: 2147483647 type: LOCAL hasLease: 0 lastRspVoteState: UNKNOWN lastRspActorID: 0 lastRspRC: ILLEGAL

17/04 08:48:51.619844 0x7f0c54605eb8:actor_CtrlUnlock:15016: Unlocking

17/04 08:48:54.260124 (nil):mosEventLog_InitDB:00355: After Eventlog DB initialized, sqlite uses 85.117188 KB

Thanks in Advance!!!

Suriya

73 Posts

April 17th, 2017 14:00

Suriya,

You answered your own question:

You can ping from 10.20.70.14 to 10.20.70.15, but you cannot ping from 10.20.70.13 to 10.20.70.14.

No need to look through the trace logs. If you can't ping from the master to the other MDMs that you want to add as slaves, there is no way it will work. You need basic IP connectivity first. Fix the ping issue, then retry the operation.
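
If it helps, the usual suspects are sketched below; these are generic Linux and ESXi commands, not ScaleIO-specific, and vSwitch/port group details will differ per environment:

# on each SVM: confirm the data NIC has the expected address, mask and MTU
ip addr show
ip route
# on each ESXi host: list vSwitches/port groups and check VLAN IDs and MTU
esxcfg-vswitch -l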

54 Posts

April 18th, 2017 20:00

Thanks for your great support. I have checked on the ESXi end and completed the installation.

54 Posts

April 19th, 2017 08:00

I have a small query: ScaleIO configured without errors, but it's not showing the ESXi host for mapping; it has been searching for a long time.

54 Posts

April 19th, 2017 11:00

How do I manually install the SDC on ESXi?

Thanks in advance

Suriya
