
SCIS IT Resources

Resources & Support Service

The services provided by the School can be summarized as follows:

  1. Computational genomics.
  2. Routine sequence-analysis support software, provided to the user community by the scientific staff.
  3. National Facility for Molecular Modelling (Northern Region), used for training during workshops and for research by groups on campus.
  4. Mirroring of biological data for use on BioGRID.
  5. High-performance computational facilities, alongside the university facility.
 

SC&IS: High Performance Computing Facility (HPCF)

The University High Performance Computing Facility, located in Room No. 31 of the new SC&IS building, was funded by the DST PURSE and UGC UPE-II programmes. It was envisaged as an important tool for raising the level of our research and remaining academically competitive, especially for research problems involving large data sets and numerical calculations. The facility has recently been upgraded with a cluster of 220 compute cores, 128 SMP cores and 2,880 GPU cores (1 x Tesla K40). Apart from this cluster, we also have a cluster with 160 compute cores, 48 SMP cores and approximately 5,000 GPU cores (2 x Tesla K20).

The HPCF Centre is conceived as a functionally distributed supercomputing environment, housing leading-edge computing systems with sophisticated software packages, connected by a powerful high-speed fibre-optic network. The computing facilities are connected to the campus LAN and WLAN, and also to the Internet.

The cluster is built with xCAT, and the scheduler used is PBS. The scheduler manages user applications and enforces the facility's scheduling policies. To provide low latency, separate switches are used for MPI (InfiniBand), storage (10G Ethernet) and IPMI (1 Gigabit Ethernet). The cluster has attached storage with a capacity of 127 TB.

 

 

Brief architectural information: New Cluster (2016)

  • Processor: Intel Xeon E5-2630
  • No. of master nodes: 1
  • No. of computing nodes: 11
  • No. of SMP nodes: 2
  • No. of hybrid (CPU-GPU) nodes: 1
  • Cluster software: xCAT
  • Server model: SUPERMICRO / TYRONE
  • NAS appliance model: TYRONE
  • Total peak performance: 7.74 TF

 

Brief architectural information: Boston Cluster (2013)

  • Processor: AMD Opteron 6300 series
  • No. of master nodes: 1
  • No. of computing nodes: 3
  • No. of SMP nodes: 1
  • No. of hybrid (CPU-GPU) nodes: 1
  • Cluster software: ROCKS 6.x
  • Server model: BOSTON
  • NAS appliance model: BOSTON Super Server
  • Total peak performance: 1.3 TF

 

Calculation procedure for peak performance (New Cluster, 2016):

  • No. of nodes: 11
  • Memory (RAM): 128 GB
  • Hard disk capacity per node: 1 TB
  • Storage capacity: 127 TB
  • No. of processors and cores: 2 x 10 = 20 (dual socket, 10 cores per socket)
  • CPU speed: 2.2 GHz
  • No. of floating-point operations per cycle for the Intel processor: 16 (double precision, per core)
  • Total peak performance = No. of nodes x cores per node x CPU speed x FLOPs per cycle = 11 x 20 x 2.2 GHz x 16 = 7.74 TF
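The arithmetic above can be checked with a short shell one-liner; all the figures (node count, cores per node, clock speed, FLOPs per cycle) are the ones listed above.

```shell
# Theoretical peak = nodes x cores-per-node x clock (GHz) x FLOPs per cycle
# 11 nodes x 20 cores x 2.2 GHz x 16 DP FLOPs/cycle, reported in GFLOPS and TF
awk 'BEGIN { printf "%.2f GFLOPS (%.2f TF)\n", 11*20*2.2*16, 11*20*2.2*16/1000 }'
```

This prints 7744.00 GFLOPS (7.74 TF), matching the quoted total peak performance.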

 

Calculation procedure for peak performance (Boston Cluster, 2013):

  • No. of nodes: 3
  • Memory (RAM): 48 GB
  • Hard disk capacity per node: 500 GB
  • Storage capacity: 50 TB (formatted)
  • No. of processors and cores: 2 x 16 = 32 (dual socket, 16 cores per socket)
  • CPU speed: 2.3 GHz


 

Software used on the UPOE cluster:

  • Ganglia: monitoring tool
  • MPI: parallel processing
  • HPL (High-Performance LINPACK): performance-testing tool
  • Application software used on the HPC cluster: R, Amber + QC tools, GRID, GOLPE, MATLAB, gnuplot, OpenEye, ADF, AutoDock, GROMACS, etc.

Scheduler used:

PBS is the job-scheduling software. A fair-share policy is implemented, so all users get equal priority; both batch and parallel jobs can be submitted through it.
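As an illustration, a minimal PBS batch script might look like the following. The job name, queue, resource counts and application binary here are hypothetical placeholders, not facility defaults; actual limits and queue names depend on the local PBS configuration.

```shell
#!/bin/bash
# Hypothetical PBS job script: request 2 nodes with 10 cores each
#PBS -N example_job
#PBS -l nodes=2:ppn=10
#PBS -l walltime=01:00:00
#PBS -q batch

cd "$PBS_O_WORKDIR"          # run from the directory the job was submitted from
mpirun -np 20 ./my_mpi_app   # launch 20 MPI ranks (placeholder binary)
```

Such a script would be submitted with `qsub job.sh`, and the queue can be monitored with `qstat`.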

 

Application software and compilers:

 

  • Open MPI
  • C, C++, Fortran compilers
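To illustrate the Open MPI toolchain listed above, a minimal compile-and-run session might look like this. This is a sketch that assumes Open MPI's wrapper compiler `mpicc` and launcher `mpirun` are on the PATH; the file and program names are placeholders.

```shell
# Write a minimal MPI program that reports each rank
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

mpicc hello_mpi.c -o hello_mpi   # compile with the MPI wrapper compiler
mpirun -np 4 ./hello_mpi         # run 4 ranks on the local node
```

On the cluster itself, the `mpirun` line would normally go inside a PBS job script rather than being run on the login node.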