fix: error.txt

This commit is contained in:
fly6516 2025-04-22 06:54:29 +00:00
parent 6c9f1149a4
commit 4d50e6cdc2

@@ -1,5 +1,5 @@
/usr/bin/python3 /root/PycharmProjects/als_movie/collab_filter.py
-25/04/22 06:25:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+25/04/22 06:53:16 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
@@ -9,7 +9,7 @@ Training: 3563128, validation: 1189844, test: 1188989
[(21708, 110, 4.5), (21708, 1641, 1.5), (21708, 1682, 4.5)]
[(21708, 95, 3.5), (21708, 153, 1.5), (21708, 161, 4.0)]
1189844
-25/04/22 06:25:52 ERROR TaskSetManager: Task 0 in stage 17.0 failed 4 times; aborting job
+25/04/22 06:53:44 ERROR TaskSetManager: Task 0 in stage 17.0 failed 4 times; aborting job
Traceback (most recent call last):
File "/root/PycharmProjects/als_movie/collab_filter.py", line 54, in <module>
lambda_=regularizationParameter)
@@ -24,7 +24,7 @@ Traceback (most recent call last):
File "/usr/local/bin/python3.6/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o135.trainALSModel.
-: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 4 times, most recent failure: Lost task 0.3 in stage 17.0 (TID 17, 100.64.0.10, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
+: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 4 times, most recent failure: Lost task 0.3 in stage 17.0 (TID 17, 100.64.0.11, executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/module/spark-2.4.8-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 364, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/opt/module/spark-2.4.8-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 69, in read_command