"mdadm --grow" is stuck. How can I safely cancel the operation?

I had a RAID 5 array of four 8 TB drives.

I then added one more 8 TB drive and tried to grow the array:

$mdadm --grow /dev/md128 --raid-disks=5

The reshape reached 6.1%, but it has been stuck there for a few days; the counter 482870548 doesn't move any more.

$less /proc/mdstat

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md128 : active raid5 sdk[6] sdh3[0] sdi[4] sdj[5] sdg3[1]
      23427528384 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      [=>...................]  reshape =  6.1% (482870548/7809176128) finish=63150741.3min speed=1K/sec
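For reference, I understand the same reshape state is also exposed by the md driver in sysfs (assuming the standard `/sys/block/<dev>/md/` layout); on this machine these entries should mirror the /proc/mdstat numbers:

```shell
# Reshape state for md128 as exposed in sysfs; each entry is printed
# only if it exists and is readable, so this is safe to run anywhere
for f in sync_action sync_completed sync_speed; do
  p="/sys/block/md128/md/$f"
  if [ -r "$p" ]; then
    printf '%-16s %s\n' "$f:" "$(cat "$p")"
  fi
done
```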

However, the md128_raid5 kernel thread still consumes 100% CPU:

$top

10070 root      20   0       0      0      0 R 100.0  0.0   3639:00 md128_raid5                                                                                                                                                                                                                    
    7 root      20   0       0      0      0 S   0.3  0.0   8:27.71 rcu_sched                                                                                                               
 1400 root      20   0       0      0      0 S   0.3  0.0  21:12.08 kswapd0                                                                                                                 
 2507 root      20   0       0      0      0 S   0.3  0.0  12:18.42 usb-storage                                                                                                             
 2561 root       0 -20       0      0      0 S   0.3  0.0   2:30.18 kworker/1:1H                                                                                                            
28016 root      20   0   28904   3428   2780 R   0.3  0.1   0:00.02 top                                                                                                                     
    1 root      20   0  138992   5400   3372 S   0.0  0.1   0:08.39 systemd                                                                                                                 
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.04 kthreadd                                                                                                                
    3 root      20   0       0      0      0 S   0.0  0.0  14:01.07 ksoftirqd/0     
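Since md128_raid5 is a kernel thread, I assume its kernel stack could show where it is spinning (PID 10070 from the top output above; reading `/proc/<pid>/stack` requires root):

```shell
# Kernel stack of the busy md128_raid5 thread (PID 10070 in the top
# output above); prints nothing if the file is absent or unreadable
pid=10070
if [ -r "/proc/$pid/stack" ]; then
  cat "/proc/$pid/stack"
fi
```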

I checked the array status with mdadm:

$mdadm -D /dev/md128

/dev/md128:
           Version : 1.2
     Creation Time : Tue Sep 22 08:59:59 2020
        Raid Level : raid5
        Array Size : 23427528384 (22342.23 GiB 23989.79 GB)
     Used Dev Size : 7809176128 (7447.41 GiB 7996.60 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Wed Jul 24 09:24:26 2024
             State : clean, reshaping 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

    Reshape Status : 6% complete
     Delta Devices : 1, (4->5)

              Name : 0a438048:RaidSecond-0  (local to host 0a438048)
              UUID : a9bcec2a:5772e6a5:045f28ac:62ae0002
            Events : 53642

    Number   Major   Minor   RaidDevice State
       0       8      115        0      active sync   /dev/sdh3
       1       8       99        1      active sync   /dev/sdg3
       5       8      144        2      active sync   /dev/sdj
       4       8      128        3      active sync   /dev/sdi
       6       8      160        4      active sync   /dev/sdk

I can't find the reason why it is stuck, and I'm not sure what the safe way out of this situation is.

So, how can I safely stop this reshape process?

Or is there any other place I should check?
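The only other diagnostics I could think of so far are the kernel log and the md rebuild speed limits (a reshape can crawl if `speed_limit_min`/`speed_limit_max` were lowered at some point; paths are the standard sysctl ones, as far as I know):

```shell
# Recent kernel messages mentioning the array; dmesg may need root,
# so errors are suppressed
dmesg 2>/dev/null | grep -i 'md128' | tail -n 20

# Current global rebuild/reshape speed limits in KB/s
for f in /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max; do
  if [ -r "$f" ]; then
    printf '%s = %s\n' "$f" "$(cat "$f")"
  fi
done
```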