Compilation error on the Jetson platform #29

Open

sonkyokukou opened this issue Jan 26, 2025 · 4 comments
Labels
stale The issue or pull request is already inactive

Comments


sonkyokukou commented Jan 26, 2025

The error messages are as follows:

/media/sunxu/llm/gpustack/llama-box/llama-box/utils.hpp: In function ‘std::string format_chat(const llama_model*, const common_chat_template&, const std::vector<nlohmann::json_abi_v3_11_3::basic_json<nlohmann::json_abi_v3_11_3::ordered_map>, std::allocator<nlohmann::json_abi_v3_11_3::basic_json<nlohmann::json_abi_v3_11_3::ordered_map> > >&, const std::vector<nlohmann::json_abi_v3_11_3::basic_json<nlohmann::json_abi_v3_11_3::ordered_map>, std::allocator<nlohmann::json_abi_v3_11_3::basic_json<nlohmann::json_abi_v3_11_3::ordered_map> > >&, const string&, bool)’:
/media/sunxu/llm/gpustack/llama-box/llama-box/utils.hpp:428:9: error: cannot convert ‘const common_chat_template’ {aka ‘const minja::chat_template’} to ‘const llama_model*’
  428 |         tmpl,
      |         ^~~~
      |         |
      |         const common_chat_template {aka const minja::chat_template}
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:1:
/media/sunxu/llm/gpustack/llama-box/llama.cpp/common/common.h:636:67: note:   initializing argument 1 of ‘std::string common_chat_apply_template(const llama_model*, const common_chat_template&, const std::vector<common_chat_msg>&, const std::vector<common_chat_func>&, bool, bool, bool)’
  636 | std::string common_chat_apply_template(const struct llama_model * model,
      |                                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
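
From the note just above, upstream common_chat_apply_template now takes the model pointer as its first argument, and format_chat already receives a (currently unused) model parameter, so the call at utils.hpp:428 most likely just needs model threaded through. A minimal sketch of that fix, untested; msgs, funcs, and the trailing flag values are illustrative assumptions, not the actual locals in llama-box:

    // utils.hpp:428 (sketch): pass `model` first to match the declaration
    // quoted in the note above; everything else keeps its old position.
    std::string prompt = common_chat_apply_template(
        model,   // const llama_model * — the argument the compiler reports as missing
        tmpl,    // const common_chat_template &
        msgs,    // std::vector<common_chat_msg>  (assumed conversion of `messages`)
        funcs,   // std::vector<common_chat_func> (assumed conversion of `functions`)
        /*add_ass=*/true, use_jinja, /*third flag, meaning unknown=*/false);
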
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/rpcserver.hpp:46,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:12,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:13:
/media/sunxu/llm/gpustack/llama-box/llama-box/utils.hpp:344:58: warning: unused parameter ‘model’ [-Wunused-parameter]
  344 | inline std::string format_chat(const struct llama_model *model, const common_chat_template &tmpl, const std::vector<json> &messages, const std::vector<json> &functions, const std::string &functions_call_mode, const bool use_jinja) {
      |                                ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/rpcserver.hpp:46,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:12,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:13:
/media/sunxu/llm/gpustack/llama-box/llama-box/utils.hpp: At global scope:
/media/sunxu/llm/gpustack/llama-box/llama-box/utils.hpp:1961:6: warning: no previous declaration for ‘void common_batch_add_with_mrope(llama_batch&, llama_token, llama_pos, int32_t, const std::vector<int>&, bool)’ [-Wmissing-declarations]
 1961 | void common_batch_add_with_mrope(
      |      ^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:12,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:13:
/media/sunxu/llm/gpustack/llama-box/llama-box/rpcserver.hpp: In function ‘void rpcserver_get_backend_memory(ggml_backend_t, int32_t, size_t*, size_t*)’:
/media/sunxu/llm/gpustack/llama-box/llama-box/rpcserver.hpp:194:57: warning: unused parameter ‘backend’ [-Wunused-parameter]
  194 | static void rpcserver_get_backend_memory(ggml_backend_t backend, int32_t gpu, size_t *free_mem, size_t *total_mem) {
      |                                          ~~~~~~~~~~~~~~~^~~~~~~
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:13:
/media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp: In function ‘void add_rpc_devices(std::string)’:
/media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:126:41: error: too few arguments to function ‘void ggml_backend_device_register(ggml_backend_dev_t, bool)’
  126 |             ggml_backend_device_register(dev);
      |             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
In file included from /media/sunxu/llm/gpustack/llama-box/llama.cpp/ggml/src/../include/ggml-cpu.h:4,
                 from /media/sunxu/llm/gpustack/llama-box/llama.cpp/src/../include/llama.h:5,
                 from /media/sunxu/llm/gpustack/llama-box/llama.cpp/src/../include/llama-cpp.h:9,
                 from /media/sunxu/llm/gpustack/llama-box/llama.cpp/common/common.h:5,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:1:
/media/sunxu/llm/gpustack/llama-box/llama.cpp/ggml/src/../include/ggml-backend.h:206:19: note: declared here
  206 |     GGML_API void ggml_backend_device_register(ggml_backend_dev_t device, bool front);
      |                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
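
This error is mechanical: the note above shows that upstream ggml_backend_device_register gained a second bool front parameter, so the call at param.hpp:126 needs one more argument. A one-line sketch; whether the RPC device belongs at the front of the registry is an assumption:

    // param.hpp:126 (sketch): supply the new `front` flag. `false` keeps the
    // device at the back of the registration order — an assumed choice.
    ggml_backend_device_register(dev, /*front=*/false);
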
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:13:
/media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp: In function ‘bool llama_box_params_parse(int, char**, llama_box_params&)’:
/media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:661:59: warning: missing initializer for member ‘common_adapter_lora_info::ptr’ [-Wmissing-field-initializers]
  661 |                 params_.llm_params.lora_adapters.push_back({
      |                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
  662 |                     std::string(arg),
      |                     ~~~~~~~~~~~~~~~~~
  663 |                     1.0f,
      |                     ~~~~~
  664 |                 });
      |                 ~~
/media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:677:59: warning: missing initializer for member ‘common_adapter_lora_info::ptr’ [-Wmissing-field-initializers]
  677 |                 params_.llm_params.lora_adapters.push_back({
      |                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~
  678 |                     std::string(n),
      |                     ~~~~~~~~~~~~~~~
  679 |                     std::stof(std::string(s)),
      |                     ~~~~~~~~~~~~~~~~~~~~~~~~~~
  680 |                 });
      |                 ~~
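
These two are only warnings, but easy to silence: common_adapter_lora_info evidently gained a ptr member that the aggregate initializers omit. A sketch for the first site, assuming ptr is a pointer that can safely start out null:

    // param.hpp:661 (sketch): name the new `ptr` member explicitly.
    params_.llm_params.lora_adapters.push_back({
        std::string(arg),
        1.0f,
        nullptr,  // common_adapter_lora_info::ptr — assumed to be filled in later by the loader
    });
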
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:3:
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp: In member function ‘bool server_context::load_model(llama_box_params&)’:
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:1251:52: error: cannot convert ‘minja::chat_template’ to ‘const llama_model*’
 1251 |                         common_chat_format_example(*chat_templates.template_default, llm_params.use_jinja).c_str());
      |                                                    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      |                                                    |
      |                                                    minja::chat_template
/media/sunxu/llm/gpustack/llama-box/llama.cpp/common/log.h:75:56: note: in definition of macro ‘LOG_TMPL’
   75 |             common_log_add(common_log_main(), (level), __VA_ARGS__); \
      |                                                        ^~~~~~~~~~~
/media/sunxu/llm/gpustack/llama-box/llama-box/utils.hpp:50:27: note: in expansion of macro ‘LOG_INF’
   50 | #define SRV_INF(fmt, ...) LOG_INF("srv %25.*s: " fmt, 25, __func__, __VA_ARGS__)
      |                           ^~~~~~~
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:1246:17: note: in expansion of macro ‘SRV_INF’
 1246 |                 SRV_INF("chat template, built_in: %s, alias: %s, tool call: %s, example:\n%s\n",
      |                 ^~~~~~~
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:1:
/media/sunxu/llm/gpustack/llama-box/llama.cpp/common/common.h:654:32: note:   initializing argument 1 of ‘std::string common_chat_format_example(const llama_model*, const common_chat_template&, bool, bool)’
  654 |     const struct llama_model * model, const common_chat_template & tmpl, bool use_jinja, bool display_funcs);
      |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~
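
Same root cause as the first error: common_chat_format_example now wants the model first, plus a display_funcs flag (per the note above). A sketch of the adjusted call at server.cpp:1251; that a model pointer named model is in scope inside server_context::load_model, and false for display_funcs, are both assumptions:

    // server.cpp:1251 (sketch): add the model argument and the new trailing flag.
    common_chat_format_example(model, *chat_templates.template_default,
                               llm_params.use_jinja, /*display_funcs=*/false).c_str()
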
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp: In lambda function:
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:4074:16: warning: unused parameter ‘step’ [-Wunused-parameter]
 4074 |         [](int step, int steps, float time, void * /*user_data*/) {
      |            ~~~~^~~~
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:4074:26: warning: unused parameter ‘steps’ [-Wunused-parameter]
 4074 |         [](int step, int steps, float time, void * /*user_data*/) {
      |                      ~~~~^~~~~
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:4074:39: warning: unused parameter ‘time’ [-Wunused-parameter]
 4074 |         [](int step, int steps, float time, void * /*user_data*/) {
      |                                 ~~~~~~^~~~
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp: In lambda function:
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:4226:22: warning: unused variable ‘t_image_processing’ [-Wunused-variable]
 4226 |             uint64_t t_image_processing            = data.at("t_image_processing");
      |                      ^~~~~~~~~~~~~~~~~~
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:4204:61: warning: unused parameter ‘req’ [-Wunused-parameter]
 4204 |     const auto handle_metrics = [&](const httplib::Request &req, httplib::Response &res) {
      |                                     ~~~~~~~~~~~~~~~~~~~~~~~~^~~
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp: In lambda function:
/media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:4563:65: warning: unused parameter ‘req’ [-Wunused-parameter]
 4563 |     const auto handle_slots_erase = [&](const httplib::Request &req, httplib::Response &res, int id_slot) {
      |                                         ~~~~~~~~~~~~~~~~~~~~~~~~^~~
In file included from /media/sunxu/llm/gpustack/llama-box/llama-box/rpcserver.hpp:31,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/param.hpp:12,
                 from /media/sunxu/llm/gpustack/llama-box/llama-box/server.cpp:13:
/media/sunxu/llm/gpustack/llama-box/llama.cpp/ggml/src/ggml-backend-impl.h: At global scope:
/media/sunxu/llm/gpustack/llama-box/llama.cpp/ggml/src/ggml-backend-impl.h:210:16: warning: ‘ngl’ defined but not used [-Wunused-variable]
  210 |     static int ngl = -1;
      |                ^~~
make[2]: *** [llama-box/CMakeFiles/llama-box.dir/build.make:76: llama-box/CMakeFiles/llama-box.dir/server.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:750: llama-box/CMakeFiles/llama-box.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
sonkyokukou (Author)

patch.diff.txt
I made the changes in the attached patch, and the build succeeded.

thxCode (Collaborator) commented Jan 27, 2025

@sonkyokukou you can try with HEAD now: we pulled in the changes from upstream llama.cpp and fixed the rpcserver position problem, so this should work on Jetson.

sonkyokukou (Author)

@thxCode OK, I'll try it again.


This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions bot added the stale label on Feb 27, 2025