Module: Syskit::Runtime
- Extended by:
- Logger::Hierarchy
- Defined in:
- lib/syskit/runtime.rb,
lib/syskit/runtime/update_task_states.rb,
lib/syskit/runtime/connection_management.rb,
lib/syskit/runtime/update_deployment_states.rb,
lib/syskit/runtime/apply_requirement_modifications.rb
Overview
Namespace containing all of the runtime system management (propagation of task states, triggering of connection updates, …)
Defined Under Namespace
Modules: PlanExtension
Classes: ConnectionManagement
Class Method Summary collapse
- .abort_process_server(plan, process_server) ⇒ Object
- .apply_requirement_modifications(plan, force: false) ⇒ Object
- .update_deployment_states(plan) ⇒ Object
This method is called once at the beginning of each execution cycle to update the state of Deployment tasks w.r.t. the state of the underlying process.
- .update_task_states(plan) ⇒ Object
This method is called at the beginning of each execution cycle, and updates the running TaskContext tasks.
Class Method Details
.abort_process_server(plan, process_server) ⇒ Object
# File 'lib/syskit/runtime/update_deployment_states.rb', line 34

def self.abort_process_server(plan, process_server)
    client = process_server.client
    # Before we can terminate Syskit, we need to abort all
    # deployments that were managed by this client
    deployments = plan.find_tasks(Syskit::Deployment).
        find_all { |t| t.arguments[:on] == process_server.name }
    deployments.each { |t| t.aborted_event.emit if !t.pending? && !t.finished? }
    Syskit.conf.remove_process_server(process_server.name)
end
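The selection logic above can be shown with a minimal, self-contained sketch: from all deployment tasks, keep those running on a given process server and abort only the ones that have actually started and not yet finished. The `Task` struct and the names below are hypothetical stand-ins for the Roby/Syskit task API, not part of it.

```ruby
# Hypothetical stand-in for a Roby task: `on` mimics the :on argument,
# and the three booleans mimic pending?/finished?/the aborted event.
Task = Struct.new(:name, :on, :pending, :finished, :aborted) do
  def pending?
    pending
  end

  def finished?
    finished
  end

  def abort!
    self.aborted = true
  end
end

# Select the deployments managed by one process server and abort the
# ones that are neither still pending nor already finished.
def abort_deployments_of(tasks, process_server_name)
  tasks
    .select { |t| t.on == process_server_name }
    .each { |t| t.abort! if !t.pending? && !t.finished? }
end

tasks = [
  Task.new("cam", "robot", false, false, false), # running: gets aborted
  Task.new("gps", "robot", true,  false, false), # pending: left alone
  Task.new("imu", "other", false, false, false)  # other server: left alone
]
abort_deployments_of(tasks, "robot")
```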
.apply_requirement_modifications(plan, force: false) ⇒ Object
# File 'lib/syskit/runtime/apply_requirement_modifications.rb', line 90

def self.apply_requirement_modifications(plan, force: false)
    if plan.syskit_has_async_resolution?
        # We're already running a resolution, make sure it is not
        # obsolete
        if force || !plan.syskit_valid_async_resolution?
            plan.syskit_cancel_async_resolution
        elsif plan.syskit_finished_async_resolution?
            running_requirement_tasks = plan.find_tasks(Syskit::InstanceRequirementsTask).running
            begin
                plan.syskit_apply_async_resolution_results
            rescue ::Exception => e
                running_requirement_tasks.each do |t|
                    t.failed_event.emit(e)
                end
                return
            end
            running_requirement_tasks.each do |t|
                t.success_event.emit
            end
            return
        end
    end

    if !plan.syskit_has_async_resolution?
        if force || plan.find_tasks(Syskit::InstanceRequirementsTask).running.any? { true }
            requirement_tasks = NetworkGeneration::Engine.discover_requirement_tasks_from_plan(plan)
            if !requirement_tasks.empty?
                # We're not resolving anything, but new IR tasks have been
                # started. Deploy them
                plan.syskit_start_async_resolution(requirement_tasks)
            end
        end
    end
end
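The control flow above is easier to see reduced to a pure decision function. This is a hedged sketch only: the returned symbols (`:cancel`, `:apply`, `:start`, `:nothing`) are illustrative and not part of the Syskit API, and it omits that in the real method canceling an obsolete resolution lets a new one be started within the same call.

```ruby
# Decide what to do with the asynchronous network resolution, given the
# current plan state. All parameter names are illustrative stand-ins for
# the plan predicates used by apply_requirement_modifications.
def resolution_action(has_async:, valid:, finished:, running_ir_tasks:, force: false)
  if has_async
    return :cancel if force || !valid # resolution is obsolete
    return :apply if finished         # commit the resolved network
    :nothing                          # resolution still in progress
  elsif force || running_ir_tasks
    :start # new InstanceRequirementsTask have started, deploy them
  else
    :nothing
  end
end
```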
.update_deployment_states(plan) ⇒ Object
This method is called once at the beginning of each execution cycle to update the state of Deployment tasks w.r.t. the state of the underlying process.
# File 'lib/syskit/runtime/update_deployment_states.rb', line 6

def self.update_deployment_states(plan)
    # We first announce all the dead processes and only then call
    # #cleanup_dead_connections, thus avoiding to disconnect connections
    # between already-dead processes
    all_dead_deployments = Set.new
    server_config = Syskit.conf.each_process_server_config.to_a
    server_config.each do |config|
        begin
            dead_deployments = config.client.wait_termination(0)
        rescue ::Exception => e
            deployments = abort_process_server(plan, config)
            all_dead_deployments.merge(deployments)
            plan.execution_engine.add_framework_error(e, "update_deployment_states")
            next
        end

        dead_deployments.each do |p, exit_status|
            d = Deployment.deployment_by_process(p)
            if !d.finishing?
                d.warn "#{p.name} unexpectedly died on process server #{config.name}"
            end
            all_dead_deployments << d
            d.dead!(exit_status)
        end
    end
end
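The method follows a two-phase pattern: poll every process server and gather all dead deployments first, reporting per-server errors without aborting the loop, and only then act on the results, so connection cleanup never runs while some deaths are still unannounced. A self-contained sketch of that gathering phase, with made-up server and deployment names:

```ruby
# Poll each process server for terminated deployments. A server whose
# poll raises is recorded as an error and skipped, so one unreachable
# server does not prevent the others from being polled.
def collect_dead_deployments(servers)
  dead = {}
  errors = []
  servers.each do |name, poll|
    begin
      dead.merge!(poll.call)
    rescue StandardError => e
      errors << [name, e] # report, but keep polling the other servers
    end
  end
  [dead, errors]
end

servers = {
  "robot" => -> { { "cam_deployment" => 0 } },        # exited normally
  "sim"   => -> { raise IOError, "connection lost" }, # server unreachable
  "aux"   => -> { { "gps_deployment" => 1 } }         # crashed, exit 1
}
dead, errors = collect_dead_deployments(servers)
```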
.update_task_states(plan) ⇒ Object
This method is called at the beginning of each execution cycle, and updates the running TaskContext tasks.
# File 'lib/syskit/runtime/update_task_states.rb', line 5

def self.update_task_states(plan) # :nodoc:
    query = plan.find_tasks(Syskit::TaskContext).not_finished
    schedule = plan.execution_engine.scheduler.enabled?
    for t in query
        execution_agent = t.execution_agent
        # The task's deployment is not started yet
        if !t.orocos_task
            plan.execution_engine.scheduler.report_holdoff "did not configure, execution agent not started yet", t
            next
        elsif execution_agent.aborted_event.pending?
            next
        elsif !execution_agent
            raise NotImplementedError, "#{t} is not yet finished but has no execution agent. #{t}'s history is\n  #{t.history.map(&:to_s).join("\n  ")}"
        elsif !execution_agent.ready?
            raise InternalError, "orocos_task != nil on #{t}, but #{execution_agent} is not ready yet"
        end

        # Some CORBA implementations (namely, omniORB) may behave weird
        # if the remote process terminates in the middle of a remote
        # call.
        #
        # Ignore tasks whose process is terminating to reduce the
        # likelihood of that happening
        if execution_agent.finishing? || execution_agent.ready_to_die?
            next
        end

        if t.setting_up?
            plan.execution_engine.scheduler.report_holdoff "is being configured", t
        end

        if schedule && t.pending? && !t.setup? && !t.setting_up?
            next if !t.meets_configurationg_precedence_constraints?

            t.freeze_delayed_arguments
            if t.will_never_setup?
                if !t.kill_execution_agent_if_alone
                    t.failed_to_start!(
                        Roby::CommandFailed.new(
                            InternalError.exception("#{t} reports that it cannot be configured (FATAL_ERROR ?)"),
                            t.start_event))
                    next
                end
            elsif t.ready_for_setup?
                t.setup.execute
                next
            else
                plan.execution_engine.scheduler.report_holdoff "did not configure, not ready for setup", t
            end
        end

        next if !t.running? && !t.starting? || t.aborted_event.pending?

        handled_this_cycle = Array.new
        begin
            state = nil
            state_count = 0
            while (!state || t.orocos_task.runtime_state?(state)) && (new_state = t.update_orogen_state)
                state_count += 1

                # Returns nil if we have a communication problem. In this
                # case, #update_orogen_state will have emitted the right
                # events for us anyway
                if new_state && handled_this_cycle.last != new_state
                    t.handle_state_changes
                    handled_this_cycle << new_state
                end
            end

            if state_count >= Deployment::STATE_READER_BUFFER_SIZE
                Runtime.warn "got #{state_count} state updates for #{t}, we might have lost some state updates in the process"
            end
        rescue Orocos::CORBA::ComError => e
            t.aborted_event.emit e
        end
    end
end
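The inner loop drains every state update buffered since the last cycle, handles a state only when it differs from the previously handled one, and warns when the number of drained updates reaches the reader's buffer size, since updates may then have been dropped. A self-contained sketch of that draining logic; `STATE_READER_BUFFER_SIZE` and the queue are stand-ins for the Syskit state reader, not its actual API:

```ruby
# Illustrative stand-in for Deployment::STATE_READER_BUFFER_SIZE
STATE_READER_BUFFER_SIZE = 4

# Drain all buffered state updates, recording each consecutive state
# change once, and flag a possible overflow when the buffer was full.
def drain_state_updates(queue)
  handled = []
  count = 0
  while (new_state = queue.shift)
    count += 1
    handled << new_state if handled.last != new_state
  end
  overflow = count >= STATE_READER_BUFFER_SIZE
  [handled, overflow]
end

handled, overflow = drain_state_updates([:RUNNING, :RUNNING, :ERROR, :RUNNING])
```

Here the repeated `:RUNNING` at the start is collapsed into one handled state, while the later `:RUNNING` after `:ERROR` counts as a new change; four drained updates hit the (stand-in) buffer size, so the overflow flag is raised.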