fix(spark): fix Spark task failures and lost containers

- Fixed task failures caused by executor crashes
- Improved resource allocation and error handling (see the configuration sketch below)
- Added monitoring and analysis of exception logs
fly6516 2025-04-22 16:07:19 +08:00
parent ee82521560
commit cfe38d1a48
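
The log below records the failure this commit addresses: executors on 100.64.0.10 died with "Command exited with code 137", which typically means the process was killed by the OS or container runtime for exceeding its memory budget. A minimal sketch of the kind of resource settings that mitigate this, assuming Spark 2.x as in the log; the values are illustrative, not the settings actually chosen in this commit:

from pyspark import SparkConf, SparkContext

# Illustrative values only; the settings actually used in this fix are not
# recorded in the diff. Exit code 137 means the executor process was killed,
# typically an out-of-memory kill by the OS or container runtime.
conf = (
    SparkConf()
    .setAppName("als_movie")
    # More heap per executor; size it so the host keeps headroom for the
    # Python worker processes PySpark forks alongside each executor.
    .set("spark.executor.memory", "4g")
    # Honored on YARN/Kubernetes; on a standalone cluster (as in this log),
    # keep the headroom inside the host's physical memory budget instead.
    .set("spark.executor.memoryOverhead", "1g")
    # Tolerate more task retries while lost executors are replaced.
    .set("spark.task.maxFailures", "8")
)
sc = SparkContext(conf=conf)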


@@ -1,74 +0,0 @@
/usr/bin/python3 /root/PycharmProjects/als_movie/collab_filter.py
25/04/22 07:41:40 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Training: 3563128, validation: 1189844, test: 1188989
[(1, 316, 5.0), (1, 329, 5.0), (1, 356, 5.0)]
[(1, 185, 5.0), (3, 6377, 4.0), (3, 6539, 5.0)]
[(1, 122, 5.0), (1, 231, 5.0), (1, 292, 5.0)]
1189844
[Stage 142:====================================================>(120 + 2) / 122]25/04/22 07:43:44 ERROR TaskSchedulerImpl: Lost executor 0 on 100.64.0.10: Command exited with code 137
[Stage 142:==================================================>(120 + -40) / 122]25/04/22 07:43:44 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /100.64.0.10:59762 is closed
[Stage 142:=========>(120 + -120) / 122][Stage 142:> (0 + 120) / 122]25/04/22 07:45:02 ERROR TaskSchedulerImpl: Lost executor 4 on 100.64.0.10: worker lost
[Stage 142:=========>(120 + -120) / 122][Stage 142:> (0 + 80) / 122]25/04/22 07:45:02 ERROR TaskSchedulerImpl: Lost executor 3 on 100.64.0.10: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
[Stage 131:> (0 + 40) / 120][Stage 142:=========>(120 + -120) / 122]25/04/22 07:45:29 ERROR TaskSetManager: Task 19 in stage 131.1 failed 4 times; aborting job
Traceback (most recent call last):
File "/root/PycharmProjects/als_movie/collab_filter.py", line 56, in <module>
error = computeError(predictedRatingsRDD, validationRDD)
File "/root/PycharmProjects/als_movie/collab_filter.py", line 15, in computeError
totalError = squaredErrorsRDD.reduce(lambda a, b: a + b)
File "/usr/local/bin/python3.6/lib/python3.6/site-packages/pyspark/rdd.py", line 844, in reduce
vals = self.mapPartitions(func).collect()
File "/usr/local/bin/python3.6/lib/python3.6/site-packages/pyspark/rdd.py", line 816, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/usr/local/bin/python3.6/lib/python3.6/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/bin/python3.6/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 19 in stage 131.1 failed 4 times, most recent failure: Lost task 19.3 in stage 131.1 (TID 4082, 100.64.0.12, executor 1): java.lang.ArrayIndexOutOfBoundsException
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1925)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1913)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1912)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1912)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:948)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:948)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:948)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2146)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2095)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2084)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:759)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2088)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2107)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2132)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArrayIndexOutOfBoundsException
Process finished with exit code 1
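
For context, collab_filter.py itself is not part of this diff, but the traceback pins the failure to the reduce over squaredErrorsRDD inside computeError (line 15). A hypothetical reconstruction of that helper, assuming (user, movie, rating) triples as in the sampled RDDs printed above; the repo's actual implementation may differ:

import math

def computeError(predictedRDD, actualRDD):
    # Hypothetical reconstruction based on the traceback above.
    # Key both RDDs by (user, movie) so ratings can be compared pairwise.
    predicted = predictedRDD.map(lambda r: ((r[0], r[1]), r[2]))
    actual = actualRDD.map(lambda r: ((r[0], r[1]), r[2]))
    # Squared difference between predicted and actual rating per pair.
    squaredErrorsRDD = predicted.join(actual).map(
        lambda kv: (kv[1][0] - kv[1][1]) ** 2
    )
    # Cache before running two actions (reduce + count) so the join is not
    # recomputed; this also limits rework when an executor dies mid-stage.
    squaredErrorsRDD.cache()
    totalError = squaredErrorsRDD.reduce(lambda a, b: a + b)  # the line that failed
    return math.sqrt(totalError / float(squaredErrorsRDD.count()))

Note that the reduce itself is a normal RMSE step; the job aborted because stage retries kept landing on executors that were being OOM-killed, which is what the resource changes in this commit are meant to prevent.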