Volume 44 Issue 1
Feb. 2024
Citation: LI Yingyu, SHI Haoying, ZHAO Tong. On-orbit Distributed Negotiation Intelligent Mission Planning for Instant Response (in Chinese). Chinese Journal of Space Science, 2024, 44(1): 159-168. doi: 10.11728/cjss2024.01.2022-0074

On-orbit Distributed Negotiation Intelligent Mission Planning for Instant Response

doi: 10.11728/cjss2024.01.2022-0074 cstr: 32142.14.cjss2024.01.2022-0074
  • Received Date: 2022-12-26
  • Rev Recd Date: 2023-02-23
  • Available Online: 2023-09-01
  • Mission planning for a LEO remote sensing constellation is a complex multi-objective optimization problem. Existing satellite mission planning research based on deep reinforcement learning suffers from several shortcomings: small test constellations, a single optimization objective, repeated task assignments, and poor model adaptability. To address these problems, this paper proposes the CON_DQN (Contract Network and Deep Q-Network) algorithm, which adopts a master-slave on-orbit distributed negotiation mechanism. Each slave satellite makes decisions based on its own plan, while the master satellite uses a deep reinforcement learning algorithm to make multi-objective decisions that trade off priority, resource cost, and load balancing, thereby performing on-orbit distributed negotiation intelligent mission planning for instant response. For scenarios in which user demands arrive dynamically and at high frequency over key observation areas, simulation experiments are conducted on task sets of different scales for a 100-satellite constellation. The results show that the proposed algorithm responds quickly and achieves higher task benefits. A simplified sketch of one negotiation round is given below the abstract.
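To make the master-slave negotiation concrete, the following is a minimal, self-contained sketch of one contract-net round, not the paper's implementation. The names Task, Bid, slave_bid, and master_select are hypothetical; the slave's cost model is replaced by a random stand-in; and a fixed weighted score stands in for the paper's DQN-based multi-objective decision over priority, resource cost, and load balancing.

```python
# Hypothetical toy sketch of one contract-net negotiation round.
# Names and weights are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass
import random

@dataclass
class Task:
    task_id: int
    priority: float        # higher value = more urgent observation request

@dataclass
class Bid:
    sat_id: int
    task_id: int
    resource_cost: float   # estimated imaging/energy cost on this satellite
    current_load: int      # tasks already queued on this satellite

def slave_bid(sat_id: int, load: int, task: Task) -> Bid:
    """Slave satellite proposes a bid based on its own plan and feasibility."""
    cost = random.uniform(0.1, 1.0)   # stand-in for a real on-board cost model
    return Bid(sat_id, task.task_id, cost, load)

def master_select(task: Task, bids: list[Bid]) -> Bid:
    """Master satellite picks the winning bid.

    A fixed weighted score stands in for the DQN decision: it trades off
    task priority, resource cost, and load balancing across the constellation.
    """
    def score(b: Bid) -> float:
        return task.priority - 0.5 * b.resource_cost - 0.3 * b.current_load
    return max(bids, key=score)

if __name__ == "__main__":
    task = Task(task_id=1, priority=5.0)
    loads = {0: 2, 1: 0, 2: 4}        # tasks already assigned per slave satellite
    bids = [slave_bid(sid, load, task) for sid, load in loads.items()]
    winner = master_select(task, bids)
    print(f"Task {task.task_id} awarded to satellite {winner.sat_id}")
```

In the paper's setting, the scoring function would be learned by a Deep Q-Network rather than hand-weighted, and the bids would come from the slave satellites' own on-orbit planning results.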

     

  • [1]
    王俊, 夏维, 胡笑旋, 等. 基于多Agent的遥感星座自主协同任务规划[J]. 指挥与控制学报, 2021, 7(3): 287-294 doi: 10.3969/j.issn.2096-0204.2021.03.0287

    WANG Jun, XIA Wei, HU Xiaoxuan, et al. Autonomous cooperative mission planning for remote sensing constellation based on multi-Agent[J]. Journal of Command and Control, 2021, 7(3): 287-294 doi: 10.3969/j.issn.2096-0204.2021.03.0287
    [2]
    XIANG M S, DENG Q C, DUAN L S, et al. Dynamic monitoring and analysis of the earthquake worst-hit area based on remote sensing[J]. Alexandria Engineering Journal, 2022, 61(11): 8691-8702 doi: 10.1016/j.aej.2022.02.001
    [3]
    章罗娜, 马忠成, 饶建兵, 等. 低轨卫星互联网发展趋势及市场展望[J]. 国际太空, 2020(11): 28-31 doi: 10.3969/j.issn.1009-2366.2020.11.006

    ZHANG Luona, MA Zhongcheng, RAO Jianbing, et al. Development trend and market prospect of low-orbit satellite Internet[J]. International Space, 2020(11): 28-31 doi: 10.3969/j.issn.1009-2366.2020.11.006
    [4]
    SINHA P K, DUTTA A. Multi-satellite task allocation algorithm for earth observation[C]//2016 IEEE Region 10 Conference. Singapore: IEEE, 2016: 403-408
    [5]
    王冲, 景宁, 李军, 等. 一种基于多Agent强化学习的多星协同任务规划算法[J]. 国防科技大学学报, 2011, 33(1): 53-58 doi: 10.3969/j.issn.1001-2486.2011.01.012

    WANG Chong, JING Ning, LI Jun, et al. An algorithm of cooperative multiple satellites mission planning based on multi-agent reinforcement learning[J]. Journal of National University of Defense Technology, 2011, 33(1): 53-58 doi: 10.3969/j.issn.1001-2486.2011.01.012
    [6]
    HUANG H, SUN C Y, HU J X, et al. Optimization design of response satellite deployment for regional target emergency observation[C]//Proceedings of 2020 International Conference on Guidance on Advances in Guidance, Navigation and Control. Tianjin: Springer, 2022: 579-591
    [7]
    SUTTON R S, BARTO A G. Reinforcement Learning: An Introduction[M]. Cambridge: MIT Press, 1998
    [8]
    马一凡, 赵凡宇, 王鑫, 等. 密集观测场景下的敏捷成像卫星任务规划方法[J]. 浙江大学学报(工学版), 2021, 55(6): 1215-1224 doi: 10.3785/j.issn.1008-973X.2021.06.023

    MA Yifan, ZHAO Fanyu, WANG Xin, et al. Agile imaging satellite task planning method for intensive observation[J]. Journal of Zhejiang University (Engineering Science), 2021, 55(6): 1215-1224 doi: 10.3785/j.issn.1008-973X.2021.06.023
    [9]
    周碧莹, 王爱平, 费长江, 等. 基于强化学习的卫星网络资源调度机制[J]. 计算机工程与科学, 2019, 41(12): 2134-2142 doi: 10.3969/j.issn.1007-130X.2019.12.006

    ZHOU Biying, WANG Aiping, FEI Changjiang, et al. A satellite network resource scheduling mechanism based on reinforcement learning[J]. Computer Engineering and Science, 2019, 41(12): 2134-2142 doi: 10.3969/j.issn.1007-130X.2019.12.006
    [10]
    彭双, 伍江江, 陈浩, 等. 基于注意力神经网络的对地观测卫星星上自主任务规划方法[J]. 计算机科学, 2022, 49(7): 242-247 doi: 10.11896/jsjkx.210500093

    PENG Shuang, WU Jiangjiang, CHEN Hao, et al. Satellite onboard observation task planning based on attention neural network[J]. Computer Science, 2022, 49(7): 242-247 doi: 10.11896/jsjkx.210500093
    [11]
    王海蛟. 基于强化学习的卫星规模化在线调度方法研究[D]. 北京: 中国科学院大学(中国科学院国家空间科学中心), 2018

    WANG Haijiao. Massive Scheduling Method Under Online Situation for Satellites Based on Reinforcement Learning[D]. Beijing: University of Chinese Academy of Sciences (National Space Science Center, Chinese Academy of Sciences), 2018
    [12]
    李大林. 天文观测卫星任务规划模型与方法研究[D]. 哈尔滨: 哈尔滨工业大学, 2021

    LI Dalin. Research on Observation Scheduling Model and Method of Astronomy Satellite[D]. Harbin: Harbin Institute of Technology, 2021
    [13]
    LIU Y C, CHEN Q F, LI C Y, et al. Mission planning for earth observation satellite with competitive learning strategy[J]. Aerospace Science and Technology, 2021, 118: 107047 doi: 10.1016/j.ast.2021.107047
    [14]
    吴白轩. 基于合同网协议的多星多任务规划方法研究[D]. 哈尔滨: 哈尔滨工程大学, 2020

    WU Baixuan. Research on Multi-star and Multitask Planning Based on Contract Network Protocol[D]. Harbin: Harbin Engineering University, 2020
    [15]
    姜维, 庞秀丽. 组网成像卫星协同任务规划方法[M]. 哈尔滨: 哈尔滨工业大学出版社, 2016

    JIANG Wei, PANG Xiuli. Collaborative Mission Planning Method for Networking Imaging Satellite[M]. Harbin: Harbin Institute of Technology Press, 2016
    [16]
    MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with deep reinforcement learning[OL]. arXiv preprint arXiv: 1312.5602, 2013
  • 加载中
