cuDNN installation error: FreeImage is not set up correctly. Please ensure FreeImage is set up correctly.

Symptom

Installing cuDNN fails with: FreeImage is not set up correctly. Please ensure FreeImage is set up correctly.

What is cuDNN

cuDNN (the CUDA Deep Neural Network library) is an NVIDIA GPU-accelerated library dedicated to speeding up deep learning operations such as convolution, RNNs, pooling, and normalization.
It is one of the core acceleration components that deep learning frameworks (TensorFlow, PyTorch, MXNet, and others) rely on to run on NVIDIA GPUs.
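
A quick way to confirm that cuDNN is visible to a framework after installation (a minimal check, assuming PyTorch is installed):

import torch

# These calls are part of PyTorch's public API; they report whether the CUDA
# runtime and the installed cuDNN are usable on this machine.
print("CUDA available:", torch.cuda.is_available())
print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:", torch.backends.cudnn.version())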

Solution

Install FreeImage:

apt-get install libfreeimage3 libfreeimage-dev
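
After installing, you can check that the library is discoverable by the dynamic linker; a small illustrative sketch with Python's ctypes (not part of the cuDNN installer):

from ctypes.util import find_library

# Prints something like 'libfreeimage.so.3' when the package is installed
# and visible to the dynamic linker, or None otherwise.
print(find_library("freeimage"))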

Docker image php:5.6-apache apt-get update error: process /bin/sh -c apt-get update did not complete successfully: exit code: 100

Symptom

Running apt-get update in the php:5.6-apache Docker image fails with:
process "/bin/sh -c apt-get update" did not complete successfully: exit code: 100

Cause

The Debian package repositories for this image's release have been archived, so the Dockerfile must point the package sources at the new archive addresses.

Solution

Add the following commands before apt-get update in the Dockerfile.

RUN echo "deb http://archive.debian.org/debian/ stretch main" > /etc/apt/sources.list \
&& echo "deb http://archive.debian.org/debian-security stretch/updates main" >> /etc/apt/sources.list
RUN apt-get update && apt-get install -y

LM Studio fails to run QwQ-32B: Error rendering prompt with jinja template: Error: Parser Error: Expected closing statement token. OpenSquareBracket !== CloseStatement. | 2025

After loading the QwQ-32B GGUF in LM Studio, sending a chat message raises an error.

Error message

Error rendering prompt with jinja template: Error: Parser Error: Expected closing statement token. OpenSquareBracket !== CloseStatement.

Solution

Per the related LM Studio issue, changing the model's prompt template to the following resolves the error.

{%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0]['role'] == 'system' %} {{- messages[0]['content'] }} {%- else %} {{- '' }} {%- endif %} {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0]['role'] == 'system' %} {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" and not message.tool_calls %} {%- set content = (message.content.split('</think>')|last).lstrip('\n') %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = (message.content.split('</think>')|last).lstrip('\n') %} {{- '<|im_start|>' + message.role }} {%- if message.content %} {{- '\n' + content }} {%- endif %} {%- for tool_call in message.tool_calls %} {%- if tool_call.function is defined %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '\n<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {{- tool_call.arguments | tojson }} {{- '}\n</tool_call>' }} {%- endfor %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- endif %}
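
LM Studio uses its own Jinja parser, so a template that parses under standard Jinja2 can still trip it up; even so, a quick parse check with Python's jinja2 package catches basic syntax mistakes after editing (a sketch; template.jinja is a hypothetical file holding the template above):

from jinja2 import Environment
from jinja2.exceptions import TemplateSyntaxError

# template.jinja is a hypothetical file containing the chat template above
with open("template.jinja", encoding="utf-8") as f:
    source = f.read()

try:
    Environment().parse(source)  # builds the AST only; raises on syntax errors
    print("Template parses under standard Jinja2")
except TemplateSyntaxError as err:
    print(f"Syntax error on line {err.lineno}: {err.message}")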

LM Studio fails to load Deepseek-R1-distill-Qwen | 2025

LM Studio 0.2.22, after downloading the Deepseek-R1-distill-Qwen-7B GGUF, fails to load the model with:
llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen''

Error message

json { "title": "Failed to load model", "cause": "llama.cpp error: 'error loading model vocabulary: unknown pre-tokenizer type: 'deepseek-r1-qwen''", "errorData": { "n_ctx": 2048, "n_batch": 512, "n_gpu_layers": 10 }, "data": { "memory": { "ram_capacity": "16.00 GB", "ram_unused": "3.40 GB" }, "gpu": { "type": "AppleMetal", "vram_recommended_capacity": "10.67 GB", "vram_unused": "2.85 GB" }, "os": { "platform": "darwin", "version": "15.2" }, "app": { "version": "0.2.22", "downloadsDir": "/Users/as/.cache/lm-studio/models" }, "model": {} } }```

Solution

Download the latest DMG from the LM Studio website and replace your current installation. Version 0.3.9 is confirmed to work.

Note: the built-in updater of older LM Studio versions only upgrades as far as 0.2.22 and does not fix this problem; you must download the new build and replace the app manually.

pandas.DataFrame.to_excel error: openpyxl.utils.exceptions.IllegalCharacterError

The file being processed contains characters that Excel does not allow, so the conversion fails with:

openpyxl.utils.exceptions.IllegalCharacterError:
Cleaning the values with the function below fixes it.

import json
import re

import pandas as pd


def clean_value(value):
    """Remove characters that openpyxl/Excel considers illegal from a single value."""
    if isinstance(value, str):
        # Regex pattern matching the control characters Excel rejects
        illegal_chars_pattern = re.compile(r'[\x00-\x08\x0B\x0C\x0E-\x1F\x7F-\x9F]')
        # Replace illegal characters with an empty string
        cleaned_value = illegal_chars_pattern.sub('', value)
        if cleaned_value != value:
            print(f"Illegal characters found in '{value}'. \nCleaned value: '{cleaned_value}'")
        return cleaned_value
    return value


def clean_dataframe(df):
    """Apply clean_value to every cell of the DataFrame."""
    # applymap is deprecated in pandas >= 2.1; use df.map(clean_value) there
    return df.applymap(clean_value)


data = []
with open(file_path, 'r', encoding='utf-8') as file:  # file_path: the JSON Lines input file
    for line in file:
        # Parse each JSON object (one per line)
        data.append(json.loads(line))

df = pd.DataFrame(data)
df = clean_dataframe(df)
df.to_excel(...)

Windows AD password reset error: Warning: ldap_mod_replace(): Modify: Insufficient access

Symptom

Binding to LDAP with an AD account and calling ldap_mod_replace to change a user's password fails with: Warning: ldap_mod_replace(): Modify: Insufficient access
The detailed error in DEBUG mode is: res_error: <00000005: SecErr: DSID-031A11EF, problem 4003 (INSUFF_ACCESS_RIGHTS)

Cause

The account used to bind to LDAP does not have the Reset password permission, so the modification is rejected.

Solution

Create a group that contains the LDAP bind account and grant that group the Reset password permission through Delegate Control; the password change then succeeds.

The core code of the password-reset example is shown below (the bind credentials and DNs are placeholders).

// Establish the connection
// Connect to the LDAP server over TLS and ignore the self-signed certificate
$ldapConn = ldap_connect('ldaps://ad.test.com', 636);
ldap_set_option($ldapConn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($ldapConn, LDAP_OPT_REFERRALS, 0);
ldap_set_option(NULL, LDAP_OPT_X_TLS_REQUIRE_CERT, LDAP_OPT_X_TLS_NEVER);
// In a development environment you can enable DEBUG output to help troubleshooting
// ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, 7);

// Bind with the delegated account ($bindDn and $bindPassword are placeholders)
ldap_bind($ldapConn, $bindDn, $bindPassword);

// Set the new password: AD expects unicodePwd as a double-quoted UTF-16LE string
$entry["unicodePwd"] = iconv("UTF-8", "UTF-16LE", '"' . $newPassword . '"');

// Call ldap_mod_replace to reset the password.
// Note: the bound account needs the Reset password permission over $userDn.
ldap_mod_replace($ldapConn, $userDn, $entry);
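
For reference, the same reset can be scripted in Python with the ldap3 package, which builds the quoted UTF-16LE unicodePwd value through its Microsoft extension (a minimal sketch; the host, DNs, and passwords are placeholders, and the bind account still needs the delegated Reset password right):

import ssl
from ldap3 import Server, Connection, Tls

# Placeholders: adjust the host, bind account, target DN, and passwords
tls = Tls(validate=ssl.CERT_NONE)  # skip certificate validation, as in the PHP example
server = Server("ad.test.com", port=636, use_ssl=True, tls=tls)
conn = Connection(server,
                  user="CN=pwreset,OU=Service,DC=test,DC=com",
                  password="bind-password",
                  auto_bind=True)

user_dn = "CN=Some User,OU=Staff,DC=test,DC=com"
# The Microsoft extension encodes unicodePwd internally before the modify
ok = conn.extend.microsoft.modify_password(user_dn, new_password="NewP@ssw0rd!")
print(ok, conn.result)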

Explanation of a related error

"LdapErr: DSID-0C09050E, comment: AcceptSecurityContext error"
This indicates that LDAP authentication failed. Possible causes:

  1. The user's password has expired
  2. The user must change the password at first logon, but the current client does not support the password change
  3. The account is locked out

As an aside, here is a useful, free tool for querying and troubleshooting domain information:

NetTools
NetTools offers more than 90 features for troubleshooting, querying, reporting on, and updating Active Directory and other LDAP-based directories,
making it a one-stop helper for AD troubleshooting. Its powerful, feature-rich LDAP client ships with over 280 predefined queries,
which makes AD administration considerably easier. The best part? It is completely free.

Basic ollama usage on Ubuntu | 2025

Installation and auto-start

curl -fsSL https://ollama.com/install.sh | sh
systemctl enable ollama

Updating to a new version

curl -fsSL https://ollama.com/install.sh | sh

Run a model directly and interact on the command line

ollama run deepseek-llm
# type a prompt and wait for the reply
/bye # end the interactive session

Other common operations

ollama list # list the installed models
ollama ps # show active models; if a model is fully loaded into the GPU, the PROCESSOR column shows 100% GPU
pgrep ollama | xargs -I% kill % # kill the running ollama processes
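
Besides the CLI, the ollama service also exposes an HTTP API on port 11434; a minimal request with Python's requests package (assuming the deepseek-llm model pulled above):

import requests

# Non-streaming generation request against the local ollama API
resp = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={"model": "deepseek-llm", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])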

Troubleshooting 1

A Docker container cannot connect to the ollama service on the host

Symptom: curl from inside the container fails with:
Failed to connect to 172.17.0.1 port 11434 after 0 ms: Connection refused
Failed to connect to host.docker.internal port 11434 after 0 ms: Connection refused
AnythingLLM reports:
Your Ollama instance could not be reached or is not responding. Please make sure it is running the API server and your connection information is correct in AnythingLLM.

Cause

By default the ollama service listens on 127.0.0.1:11434, so it cannot serve Docker containers.
The difference between listening on 127.0.0.1 and 0.0.0.0 (see the sketch after this list):

  • 127.0.0.1 (loopback only)
    Only the local machine can connect; external devices and Docker containers cannot.
  • 0.0.0.0 (all interfaces)
    Connections are accepted from any network interface, including the local machine, the LAN, and the internet.
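
The difference is simply which interface the listening socket is bound to; a small Python illustration (using throwaway ports, unrelated to the real ollama service):

import socket

# Bound to 127.0.0.1: only connections arriving on the loopback interface
# are accepted; traffic from the Docker bridge (e.g. 172.17.0.1) is refused.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
loopback.listen()
print("loopback-only listener:", loopback.getsockname())

# Bound to 0.0.0.0: connections are accepted on every interface, which is
# what OLLAMA_HOST=0.0.0.0 configures for the ollama service below.
everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
everywhere.bind(("0.0.0.0", 0))
everywhere.listen()
print("all-interfaces listener:", everywhere.getsockname())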

Solution

Modify the ollama service configuration so that it listens on 0.0.0.0.

systemctl edit ollama.service
-------------------------------------
# Add the configuration between the comment lines below, then save and exit
### Editing /etc/systemd/system/ollama.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

### Lines below this comment will be discarded
-------------------------------------
systemctl daemon-reload
systemctl restart ollama

# On macOS, run the following and restart the ollama service
launchctl setenv OLLAMA_HOST "0.0.0.0"

# On Windows, add a user environment variable and restart the ollama service
#   OLLAMA_HOST = 0.0.0.0
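
Once the service listens on 0.0.0.0, you can verify from inside a container that the host's ollama is reachable; a quick check with Python's requests (172.17.0.1 is the default Docker bridge gateway seen in the error above, adjust if your setup differs):

import requests

# /api/tags lists the models the server knows about; a 200 response means
# the container can reach the ollama service on the host.
resp = requests.get("http://172.17.0.1:11434/api/tags", timeout=5)
print(resp.status_code)
print([m["name"] for m in resp.json().get("models", [])])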

Installing VMware Player when setting up GNS3 for network simulation

GNS3 recommends using VMware virtual machines; if VMware is not installed, GNS3 reports an error:

VMware vmrun tool could not be found, VMware or the VIX API (required for VMware player) is probably not installed. You can download it from https://customerconnect.vmware.com/downloads/details?downloadGroup=PLAYER-1400-VIX1170&productId=687. After installation you need to restart GNS3.

Since Broadcom acquired VMware, the documentation and software download links have been continuously reorganized.
Broadcom has so many products that the download links are hard to find, so here is a quick summary.
VMware Workstation Player release notes (Broadcom edition)

VMware Workstation Player download link (Broadcom edition)

  1. From the Software menu, select VMware Cloud Foundation and click My Downloads
  2. Page through the list to find VMware Workstation Player and click it
  3. Choose a version and download it

Other software, such as VMware Fusion for Mac, can be found the same way.

Goodbye, VMware Fusion for Mac (documentation expires 2025-01-31).