Bank Account Opening%%https://milaiai.github.io/blog/post/bank/%%2024-05-08%%Opening a Hong Kong bank account
Brick-and-mortar banks
Hello everyone, today I'd like to share my experience opening bank accounts in Hong Kong. I went there just before the May 1 Labor Day holiday this year. Before the trip I watched a few related videos and did some preparation, but once in Hong Kong I found the reality quite different from what those videos described. Here is how my account opening went.
First, the brick-and-mortar banks, which I split into three columns. The first column is the banks where I opened accounts successfully without an appointment: you can walk into each bank and finish in about half a day. Note that the branches can be crowded with account-opening customers, and most of the time is spent queueing, so book an appointment in advance if you can.
HSBC is the easiest: you only need your Mainland Travel Permit for Hong Kong and Macao and the entry slip, and after opening you just deposit a few hundred dollars. Hang Seng Bank used to refuse mainland customers and only recently started accepting them; it is much more cautious and checks proof of address issued within the last three months, income records, and so on. For Bank of China (Hong Kong), note that a paper proof of address is required. I only brought an electronic copy, so the bank sent me to a print shop (Fotomax) for a printout; printing in Hong Kong is expensive, at HK$8 per color page. After opening, if you deposit 10,000 you can get the card on the spot; otherwise deposit 1,000 and the card is mailed to you. Bank of China charges a handling fee on every inbound remittance from the mainland. The second column, the state-owned "China-" prefixed banks, all require an appointment. When I went, ICBC (Asia) was already booked out to July, so if you want one of these banks, book about a month ahead. Some CITIC International branches accept walk-ins, but the branch I visited required me to buy insurance (a 5-year lock-in at 5.3% annual interest) before opening an account, so I declined.
The third column is Citibank, which targets high-end customers: if you have 1.5 million, you can open this one. Without it you may still be able to open an account, but with conditions such as annual management fees.
Account-opening materials
The materials are easy to prepare. Proof of address must be precise down to the street and unit number; a credit-card statement can be used. If you apply to a "China-" prefixed bank, it is best to print it out in advance.
Cash: bring about 10,000 RMB; there is no need to convert it to Hong Kong dollars.
Some banks require proof of funds, which can be salary statements or a domestic or overseas brokerage account. For Hang Seng Bank, you may need a deposit certificate of 100,000 or more, or a brokerage account.
For the location, the Tsim Sha Tsui area is a good choice: banks are everywhere and there are plenty of affordable hotels nearby. If you plan to leave Hong Kong the same day without staying overnight, you can open accounts near Tai Wo MTR station, which is close to the Lo Wu checkpoint.
Virtual banks
These virtual banks (Ant Bank, ZA Bank, Airstar Bank, livi Bank, and Fusion Bank) are all easy to open: as long as you are physically in Hong Kong, you can download the app from an app store such as Google Play and complete the whole process on your phone. You can open them while queueing at the physical banks.
Thank you for reading; the above is for reference only.
References
Hang Seng Bank account-opening materials
$$$
Pytorch + C++ + CUDA%%https://milaiai.github.io/blog/post/pytorch-cuda/%%24-04-20%%Introduction
pytorch -> C++ -> CUDA
pybind: call C++ from Python
CUDA GPU architecture
CUDA: grid -> block -> thread
Why have blocks as an intermediate layer? Threads within one block can share memory and synchronize with each other, while blocks are scheduled independently, so a grid can scale across any number of SMs.
Grid size limit (number of blocks): $(2^{31}-1) \times (2^{16}-1) \times (2^{16}-1)$
Thread limit per block: 1024
Environment Building
conda create -n cppcuda python=3.8
conda activate cppcuda
Install pytorch
python -m pip install -U pip
pip3 install torch torchvision torchaudio
pytorch path
How to check the path:
import torch
print(torch.__file__)
Path example:
"/usr/include/python3.8",
"/home/.local/lib/python3.8/site-packages/torch/include/",
"/home/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include"
Python setup
Example for CppExtension:
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='cppcuda_tutorial',
    version='1.0',
    author='qingbao',
    author_email='<EMAIL>',
    description='cppcuda example',
    long_description='A tutorial for using C++ and CUDA in PyTorch',
    long_description_content_type='text/markdown',
    ext_modules=[
        CppExtension(
            name='cppcuda_tutorial',
            sources=['interplation.cpp']),
    ],
    cmdclass={'build_ext': BuildExtension},
)
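Once the extension is built (e.g. with pip install ., which runs the setup.py above), it is imported like any Python package. A minimal usage sketch; the exposed function name trilinear_interpolation is an assumption following the tutorial linked in the references, not something defined above:

import torch
import cppcuda_tutorial  # the extension built by the setup.py above

feats = torch.rand(16, 8, 32)        # (N, 8, F) features at the 8 vertices
points = torch.rand(16, 3) * 2 - 1   # (N, 3) query points in [-1, 1]^3
# Hypothetical binding name; see the referenced kwea123 tutorial.
out = cppcuda_tutorial.trilinear_interpolation(feats, points)
print(out.shape)                     # expected: torch.Size([16, 32])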
General use case
non-parallel computation, e.g. volume rendering
lots of sequential computation
Example: linear interpolation, bilinear interpolation, trilinear interpolation
Input
feats: (N, 8, F)
N: number of volumes
8: the 8 vertices of each volume
F: number of features per vertex
Output
points: (N, 3): the interpolated points
Parallel computation: parallelize over the N points and over the F feature channels (a pure-PyTorch reference of the computation follows below).
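Before writing the kernel, it helps to pin down what it should compute in plain PyTorch. A reference sketch; the vertex ordering below is an assumed convention (following the kwea123 tutorial in the references), not fixed by the text above:

import torch

def trilinear_interpolation_py(feats, points):
    """feats: (N, 8, F) vertex features; points: (N, 3) in [-1, 1]^3."""
    # Map coordinates from [-1, 1] to interpolation weights in [0, 1].
    u = (points[:, 0:1] + 1) / 2
    v = (points[:, 1:2] + 1) / 2
    w = (points[:, 2:3] + 1) / 2
    a = (1 - v) * (1 - w)
    b = (1 - v) * w
    c = v * (1 - w)
    d = v * w
    # Blend the 4 vertices of each face, then blend the two faces along u.
    return (1 - u) * (a * feats[:, 0] + b * feats[:, 1] +
                      c * feats[:, 2] + d * feats[:, 3]) + \
           u * (a * feats[:, 4] + b * feats[:, 5] +
                c * feats[:, 6] + d * feats[:, 7])   # (N, F)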
How to compute the block size:
const int N = feats.size(0);
const int F = feats.size(2);
// torch::zeros({N, F}, torch::dtype(torch::kInt32).device(feats.device()));
torch::Tensor feat_interp = torch::zeros({N, F}, feats.options());

const dim3 threads(16, 16); // max 256 threads per block here: two dimensions, 16 threads each
const dim3 blocks((N + threads.x - 1) / threads.x, (F + threads.y - 1) / threads.y);
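The grid size is just a ceiling division in each dimension; the same arithmetic in Python, for intuition (the sizes are made up for illustration):

def ceil_div(n, d):
    # Number of blocks needed so that blocks * d >= n.
    return (n + d - 1) // d

N, F = 20, 100
threads = (16, 16)
blocks = (ceil_div(N, threads[0]), ceil_div(F, threads[1]))
print(blocks)  # (2, 7): 2*16 >= 20 and 7*16 >= 100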
Issues
unsupported clang version
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/host_config.h:151:2: error: -- unsupported clang version! clang version must be less than 16 and greater than 3.2 . The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
1 error generated.
Solution: use a clang version lower than 16.
set(CMAKE_C_COMPILER /usr/bin/clang-13)
set(CMAKE_CXX_COMPILER /usr/bin/clang++-13)
Failed to initialize NumPy
.local/lib/python3.8/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to initialize NumPy: numpy.core.multiarray failed to import (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
  device: torch.device = torch.device(torch._C._get_default_device()),  # torch.device('cpu'),
Solution:
python -m pip install -U pip setuptools
setuptools>=49.4.0 is required
    _check_cuda_version(compiler_name, compiler_version)
  File "/home/qingbao/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 411, in _check_cuda_version
    raise ValueError("setuptools>=49.4.0 is required")
ValueError: setuptools>=49.4.0 is required
References
Pytorch+cpp/cuda extension 教學 tutorial 1
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications__technical-specifications-per-compute-capability
https://github.com/kwea123/pytorch-cppcuda-tutorial
$$$
Python%%https://milaiai.github.io/blog/post/python/%%24-01-28%%Cuda
CUDA Toolkit Archive
set env
export PATH=/usr/local/cuda/bin:$PATH
CUDA Toolkit 12.3
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.3.2/local_installers/cuda-repo-ubuntu2004-12-3-local_12.3.2-545.23.08-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-12-3-local_12.3.2-545.23.08-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2004-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-3
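As a quick sanity check once the toolkit (and, below, the driver) is installed, you can ask PyTorch what it sees; a minimal sketch, assuming PyTorch is installed in the active environment:

import torch

# Report the CUDA runtime PyTorch was built against and whether a GPU is usable.
print("torch CUDA version:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))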
Install Driver
sudo apt-get install -y cuda-drivers
sudo apt-get install -y nvidia-kernel-open-545
sudo apt-get install -y cuda-drivers-545
Cudnn
https://developer.nvidia.com/rdp/cudnn-archive
Conda
https://docs.anaconda.com/free/anaconda/install/linux/
wget -c https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
pip
Using a domestic (China) mirror for pip. Useful:
pip3 install numpy -i https://pypi.tuna.tsinghua.edu.cn/simple
~/.pip/pip.conf:
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = https://pypi.tuna.tsinghua.edu.cn
Some pip mirrors in China:
Alibaba Cloud: http://mirrors.aliyun.com/pypi/simple/
USTC: https://pypi.mirrors.ustc.edu.cn/simple/
Douban: http://pypi.douban.com/simple/
Tsinghua University: https://pypi.tuna.tsinghua.edu.cn/simple/
USTC (http): http://pypi.mirrors.ustc.edu.cn/simple/
Pytorch
https://pytorch.org/
pip3 install torch torchvision torchaudio
Pip source list
https://www.cnblogs.com/chenjo/p/14071864.html
~/.pip/pip.conf
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = mirrors.aliyun.com
Set the source for a single pip install:
pip install pymysql -i https://pypi.tuna.tsinghua.edu.cn/simple/   # domestic mirror
pip install <package> -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
References
pip安装包报错Could not find a version that satisfies the requirement pymysql (from versions: none)
$$$
Ubuntu%%https://milaiai.github.io/blog/post/ubuntu/%%24-01-28%%Locate package
sudo updatedb
locate eigen3
Source list
Ubuntu 20.04
Alibaba Cloud (Aliyun) mirror:
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
Tsinghua University mirror:
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
NetEase (163) mirror:
deb http://mirrors.163.com/ubuntu/ focal main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ focal-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ focal-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ focal-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ focal-backports main restricted universe multiverse
USTC mirror:
deb https://mirrors.ustc.edu.cn/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.ustc.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.ustc.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.ustc.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.ustc.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.ustc.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.ustc.edu.cn/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.ustc.edu.cn/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.ustc.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
deb-src https://mirrors.ustc.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
$$$
Colmap%%https://milaiai.github.io/blog/post/colmap/%%24-01-19%%Install
Get repo
git clone https://github.com/colmap/colmap
https://colmap.github.io/install.html
install dependencies
sudo apt-get install \
    git \
    cmake \
    ninja-build \
    build-essential \
    libboost-program-options-dev \
    libboost-filesystem-dev \
    libboost-graph-dev \
    libboost-system-dev \
    libeigen3-dev \
    libflann-dev \
    libfreeimage-dev \
    libmetis-dev \
    libgoogle-glog-dev \
    libgtest-dev \
    libsqlite3-dev \
    libglew-dev \
    qtbase5-dev \
    libqt5opengl5-dev \
    libcgal-dev \
    libceres-dev
vim CMakeLists.txt
set(CMAKE_CUDA_ARCHITECTURES 75 86)
build
git clone https://github.com/colmap/colmap.git
cd colmap
mkdir build
cd build
cmake .. -GNinja
ninja
sudo ninja install
$$$
gaussian_splatting%%https://milaiai.github.io/blog/post/gaussian-splatting/%%24-01-19%%Install
Cuda: 12.2
conda env:
conda create -n gaussian_splatting python=3.8
conda activate gaussian_splatting
pip3 install torch torchvision torchaudio
pip3 install plyfile tqdm pillow
pip3 install submodules/diff-gaussian-rasterization
pip3 install submodules/simple-knn
$$$
Dictionary%%https://milaiai.github.io/blog/post/dict/%%24-01-10%%Grammar check
https://quillbot.com/
https://languagetool.org/
https://www.gingersoftware.com/grammarcheck
https://app.linguix.com/docs/my
deepl
https://www.deepl.com/
Fairy
https://github.com/revir/FairyDict
Umi-OCR
https://github.com/hiroi-sora/Umi-OCR
gnome-dictionary
sudo apt install gnome-dictionary
GoldenDict
sudo apt install goldendict
artha
sudo apt install artha
eudic
https://www.eudic.net/v4/en/app/download
wordnet-gui
sudo apt install wordnet-gui
iciba (writing and grammar checking)
https://www.iciba.com/grammar
Dict
sudo apt install dict
Dict download
https://github.com/colordict/colordict.github.io/tree/master
https://www.mdict.cn/wp/?page_id=5227&lang=zh
$$$
Android Socket%%https://milaiai.github.io/blog/post/socket_android/%%23-12-16%%A Simple TCP client of NIST time server
Demo: AndroidTcpDemo-GetNISTTime
Add Internet Permission to AndroidManifest.xml
<uses-permission android:name="android.permission.INTERNET" />
Main Activity
// This is a simple Client app example to get NIST time
package com.yubao.androidtcpdemo;

import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.TextView;
import com.yubao.androidtcpdemo.databinding.ActivityMainBinding;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class MainActivity extends AppCompatActivity {
    // Used to load the 'androidtcpdemo' library on application startup.
    static {
        System.loadLibrary("androidtcpdemo");
    }

    private ActivityMainBinding binding;
    private TextView tvTime;
    private String serverName = "time.nist.gov";
    private int serverPort = 13;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        binding = ActivityMainBinding.inflate(getLayoutInflater());
        setContentView(binding.getRoot());
        // Example of a call to a native method
        // TextView tv = binding.sampleText;
        // tv.setText(stringFromJNI());
    }

    public void onClickGetTime(View view) {
        tvTime = findViewById(R.id.tvTime);
        NistTimeClient runnable = new NistTimeClient(serverName, serverPort);
        new Thread(runnable).start();
    }

    private class NistTimeClient implements Runnable {
        private String serverName;
        private int serverPort;

        public NistTimeClient(String serverName, int serverPort) {
            this.serverName = serverName;
            this.serverPort = serverPort;
        }

        @Override
        public void run() {
            try {
                Socket socket = new Socket(serverName, serverPort);
                BufferedReader br = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                br.readLine();
                String recTime = br.readLine().substring(6, 23);
                socket.close();
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        tvTime.setText(recTime);
                    }
                });
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    /**
     * A native method that is implemented by the 'androidtcpdemo' native library,
     * which is packaged with this application.
     */
    public native String stringFromJNI();
}
Design
<TextView
    android:id="@+id/tvTime"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="System Time Here"
    android:textSize="34sp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    app:layout_constraintVertical_bias="0.215" />

<Button
    android:id="@+id/btnGetTime"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_marginTop="34dp"
    android:onClick="onClickGetTime"
    android:text="Get NIST Time"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintHorizontal_bias="0.5"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toBottomOf="@+id/tvTime" />
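To exercise the client above without hitting time.nist.gov, you can run a small local stand-in and point serverName/serverPort at it; a minimal Python sketch (the port 1313 and the filler fields are assumptions; only the date/time portion read by substring(6, 23) matters):

import socket
from datetime import datetime, timezone

# Hypothetical local stand-in for time.nist.gov:13 (daytime protocol).
# The reply mimics NIST's format so the client's substring(6, 23) yields
# "yy-MM-dd HH:mm:ss". Port 1313 avoids the privileged port 13.
HOST, PORT = "0.0.0.0", 1313

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    print(f"daytime test server on {HOST}:{PORT}")
    while True:
        conn, addr = srv.accept()
        with conn:
            now = datetime.now(timezone.utc).strftime("%y-%m-%d %H:%M:%S")
            # Leading newline: the client discards the first readLine().
            conn.sendall(f"\n60000 {now} 50 0 0  50.0 UTC(NIST) *\n".encode())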
Client and Server Example
Demo: AndroidTcpClientServer
Client demo:
Server demo:
Add Permission
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Client-end src
package com.yubao.androidtcpclient;

import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;
import com.yubao.androidtcpclient.databinding.ActivityMainBinding;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class MainActivity extends AppCompatActivity {
    // Used to load the 'androidtcpclient' library on application startup.
    static {
        System.loadLibrary("androidtcpclient");
    }

    private ActivityMainBinding binding;
    // client example
    private TextView tvReceivedData;
    private EditText etServerName, etServerPort;
    private Button btnClientConnect;
    private String serverName;
    private int serverPort;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        binding = ActivityMainBinding.inflate(getLayoutInflater());
        setContentView(binding.getRoot());
        tvReceivedData = findViewById(R.id.tvReceivedData);
        etServerName = findViewById(R.id.etServerName);
        etServerPort = findViewById(R.id.etServerPort);
        btnClientConnect = findViewById(R.id.btnClientConnect);
    }

    public void onClictConnect(View view) {
        serverName = etServerName.getText().toString();
        serverPort = Integer.valueOf(etServerPort.getText().toString());
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Socket socket = new Socket(serverName, serverPort);
                    BufferedReader br_input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                    String txtFromServer = br_input.readLine();
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            tvReceivedData.setText(txtFromServer);
                        }
                    });
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        }).start();
    }

    public native String stringFromJNI();
}
Server-end src
package com.yubao.androidtcpserver2;

import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.TextView;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ServerActivity extends AppCompatActivity {
    private TextView tvServerName, tvServerPort, tvStatus;
    private String serverIP = "127.0.0.1";
    private int serverPort = 8899;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_server);
        tvServerName = findViewById(R.id.tvServerName);
        tvServerPort = findViewById(R.id.tvServerPort);
        tvStatus = findViewById(R.id.tvStatus);
        tvServerName.setText(serverIP);
        tvServerPort.setText(String.valueOf(serverPort));
    }

    private ServerThread serverThread;

    public void onCLiickServer(View view) {
        serverThread = new ServerThread();
        serverThread.StartServer();
    }

    public void onClickStopServer(View view) {
        serverThread.StopServer();
    }

    class ServerThread extends Thread implements Runnable {
        private boolean serverRunning;
        private ServerSocket serverSocket;
        private int count = 0;

        public void StartServer() {
            serverRunning = true;
            start();
        }

        @Override
        public void run() {
            try {
                serverSocket = new ServerSocket(serverPort);
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        tvStatus.setText("Waiting for clients");
                    }
                });
                while (serverRunning) {
                    Socket socket = serverSocket.accept();
                    count++;
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            tvStatus.setText("Connect to: " + socket.getInetAddress() + " : " + socket.getLocalPort());
                        }
                    });
                    PrintWriter output_server = new PrintWriter(socket.getOutputStream());
                    output_server.write("Welcome to Server:" + count);
                    output_server.flush();
                    socket.close();
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }

        public void StopServer() {
            serverRunning = false;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    if (serverSocket != null) {
                        try {
                            serverSocket.close();
                            runOnUiThread(new Runnable() {
                                @Override
                                public void run() {
                                    tvStatus.setText("Server Stopped");
                                }
                            });
                        } catch (IOException e) {
                            throw new RuntimeException(e);
                        }
                    }
                }
            }).start();
        }
    } // class ServerThread
}
$$$
ROS Installation%%https://milaiai.github.io/blog/post/ros/%%23-12-10%%Installation
http://wiki.ros.org/ROS/Installation
rosdep update
Error Message:
Warning: running 'rosdep update' as root is not recommended. You should run 'sudo rosdep fix-permissions' and invoke 'rosdep update' again without sudo.
ERROR: error loading sources list: ('The read operation timed out',)
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml
Query rosdistro index https://raw.githubusercontent.com/ros/rosdistro/master/index-v4.yaml
Skip end-of-life distro "ardent"
Skip end-of-life distro "bouncy"
Skip end-of-life distro "crystal"
Skip end-of-life distro "dashing"
Skip end-of-life distro "eloquent"
Add distro "foxy"
ERROR: Service 'rdsslam' failed to build: The command '/bin/sh -c rosdep update' returned a non-zero code: 1
References
rosdep update 超时失败2021最新解决方法 (fixing rosdep update timeouts, 2021): https://blog.csdn.net/Kenny_GuanHua/article/details/116845781
$$$
VINS FUSION%%https://milaiai.github.io/blog/post/vins_fusion/%%22-04-16%%Introduction
VINS-Fusion is the stereo visual-inertial SLAM system released by Prof. Shaojie Shen's group at HKUST, following VINS-Mono and VINS-Mobile (their monocular visual-inertial SLAM systems). It is an optimization-based multi-sensor state estimator that achieves accurate self-localization for autonomous applications (drones, cars, and AR/VR). VINS-Fusion extends VINS-Mono and supports multiple visual-inertial sensor types (monocular camera + IMU, stereo cameras + IMU, or even stereo cameras only). The authors also provide an example module that fuses VINS with GPS.
Build
Get the project:
git clone https://github.com/yubaoliu/VINS-Fusion -b dev
Ubuntu 16, ROS, OpenCV 3.x
set(OpenCV_DIR "/home/yubao/Software/install/opencv_3.3.1/share/OpenCV")
or
export OpenCV_DIR="/home/yubao/Software/install/opencv_3.3.1/share/OpenCV"
Compile:
cd ROS_PROJECT_DIR
catkin_make
Problems
Compilation errors
Rename these symbols. The code used the old C-style API without including the corresponding headers, hence the errors. In theory, adding the C headers would also let it compile, but those APIs are deprecated, so it is better to switch to the new names directly:
CV_LOAD_IMAGE_GRAYSCALE -> cv::IMREAD_GRAYSCALE
CV_GRAY2RGB -> cv::COLOR_GRAY2RGB
CV_FONT_HERSHEY_SIMPLEX -> cv::FONT_HERSHEY_SIMPLEX
Runtime errors
Segmentation fault:
[ INFO] [1650104763.260207805]: reading paramerter of camera /home/yubao/catkin_ws/src/VINS-Fusion/config/euroc/cam0_mei.yaml
double free or corruption (out)
Call sequence in rosNodeTest.cpp:
estimator.setParameter();
// -> featureTracker.readIntrinsicParameter(CAM_NAMES);
// -> camodocal::CameraPtr camera = CameraFactory::instance()->generateCameraFromYamlFile(calib_file[i]);
// -> cv::FileStorage fs( filename, cv::FileStorage::READ );
Refer to this issue: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/issues/106
I never managed to solve this problem. The cause is probably that my build used OpenCV 3.x while ROS Noetic uses OpenCV 4.x (the system OpenCV on Ubuntu 20 is also 4.x), so it is most likely an OpenCV version conflict.
In the end I gave up building on the host machine and decided to use Docker instead, settling the issue once and for all.
References
Vins-Fusion安装记录 (Vins-Fusion installation notes)
彻底搞懂视觉-惯性SLAM:vins-fusion原理精讲与源码剖析-视觉传感器部分 (thoroughly understanding visual-inertial SLAM: vins-fusion principles and source code analysis, visual sensor part)
一起快速上手 VINS-Fusion (getting started quickly with VINS-Fusion)
$$$
NFS Share Service Configuration%%https://milaiai.github.io/blog/post/nfs/%%22-04-05%%Server
Install
sudo apt update
sudo apt install nfs-kernel-server
Configuration file: /etc/exports
/srv/nfs4 192.168.33.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/srv/nfs4/backups 192.168.33.0/24(ro,sync,no_subtree_check) 192.168.33.3(rw,sync,no_subtree_check)
/srv/nfs4/www 192.168.33.110(rw,sync,no_subtree_check)
/mnt/nfs_share subnet(rw,sync,no_subtree_check)
/var/nfs/general client_ip(rw,sync,no_subtree_check)
/home client_ip(rw,sync,no_root_squash,no_subtree_check)
fsid=0 defines the NFS root directory.
crossmnt is required to share subdirectories of an exported directory.
ro: the host has read-only access to the shared directory.
rw: the host has read-write access; the client is granted both read and write permission to the volume.
root_squash: when a client accesses the share as root, root is mapped to an anonymous user.
no_root_squash: root on the client is not remapped. As mentioned earlier, NFS will translate any request from the remote root user to a non-privileged user; this is an intended security feature to prevent unwanted access to the host system. However, using this option disables this behavior.
all_squash: every user on the client is mapped to the anonymous user when accessing the share.
anonuid: map client users to the given local user ID.
anongid: map client users to the given local group ID.
sync: data is written synchronously to memory and disk. This forces NFS to write the changes to disk before replying; it offers a more stable and consistent experience and the reply reflects the actual state of the remote volume, but file operations are slower.
async: data is buffered in memory first rather than written directly to disk.
no_subtree_check: prevents subtree checking. If not disabled, hosts are forced to check the existence of the file in the exported tree for every single request from the client, which can lead to problems, for example when a file is renamed while the client is using it. In most cases, disabling subtree checks is the way to go.
insecure: allow non-privileged (unauthorized) access from this machine.
Bind additional directories into the export tree
sudo mount --bind /opt/backups /srv/nfs4/backups
sudo mount --bind /var/www /srv/nfs4/www
These binds do not survive a reboot; edit the fstab file to make them permanent.
Permanent mount via /etc/fstab:
/opt/backups /srv/nfs4/backups none bind 0 0
/var/www /srv/nfs4/www none bind 0 0
Apply the configuration:
sudo exportfs -ra
View the shared exports:
sudo exportfs -v
exportfs usage:
-a: mount or unmount everything listed in /etc/exports
-r: re-export the directories shared in /etc/exports
-u: unexport a directory
-v: print verbose output to the screen
Firewall
sudo ufw status
sudo ufw enable
sudo ufw disable
sudo ufw status
Allow a specific IP (or any host) to access the NFS port:
sudo ufw allow nfs
sudo ufw allow from 31.171.250.221 to any port nfs
sudo ufw allow from any to any port nfs
Check the ports:
$ rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  39242  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  52780  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  53401  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  42315  nlockmgr
    100021    3   udp  42315  nlockmgr
    100021    4   udp  42315  nlockmgr
    100021    1   tcp  42315  nlockmgr
    100021    3   tcp  42315  nlockmgr
    100021    4   tcp  42315  nlockmgr
The NFS-related services and the ports they occupy:
Service      Port   Protocol  Note
nfs          2049   tcp/udp   fixed port
portmapper   111    tcp/udp   fixed port
mountd       20048  tcp/udp   not fixed; pin it manually
nlockmgr     42315  tcp/udp   not fixed; pin it manually
Pin the mountd service port to 20048:
echo "mountd 20048/tcp" >> /etc/services
echo "mountd 20048/udp" >> /etc/services
Pin the nlockmgr service port to 42315:
echo "fs.nfs.nlm_udpport=42315" >> /etc/sysctl.conf
echo "fs.nfs.nlm_tcpport=42315" >> /etc/sysctl.conf
sysctl -p
Open the fixed NFS ports in the server firewall (2049, 111, 20048, 42315); clients can then mount normally:
ufw allow 2049/tcp
ufw allow 2049/udp
ufw allow 111/tcp
ufw allow 111/udp
ufw allow 20048/tcp
ufw allow 20048/udp
ufw allow 42315/tcp
ufw allow 42315/udp
Client
sudo apt install nfs-common
Mount:
$ sudo mount host_ip:/var/nfs/general /nfs/general
$ sudo mount host_ip:/home /nfs/home
Unmount:
sudo umount /nfs/general
Common commands and tips
versions
$ sudo cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
File access permissions
sudo chown -R nobody:nogroup /mnt/nfs_share/
Restart the service
sudo systemctl restart nfs-kernel-server
Check mount status
$ df -h
$ showmount -e IP
Check size
du -sh /nfs/general
If the client mounts with an NFS protocol version the server does not serve, specify the version with nfsvers:
mount -t nfs -o nfsvers=3 x.x.x.x:/share /mnt
If NFS cannot provide a lock service:
Use remote locks: start the rpc.statd service on the server to provide remote locking.
Use local locks: mount on the client with -o nolock; the client's mount parameters will then show local locking (local_lock=all).
mount -t nfs -o nolock x.x.x.x:/share /mnt
References
Installing NFS on Ubuntu 20.04 server
How to Install NFS Client and Server on Ubuntu 20.04
如何在 Ubuntu 18.04 上安装和配置 NFS 服务器 (how to install and configure an NFS server on Ubuntu 18.04)
linux服务器 NFS + 防火墙配置 (Linux server NFS + firewall configuration)
nfs常见问题处理 (common NFS troubleshooting)
$$$
GitHub Tips%%https://milaiai.github.io/blog/post/git/%%22-03-10%%Ways to access GitHub from mainland China
Download through mirrors
Download through proxy sites
Fork the repository to Gitee and download from there
Speed it up by editing the HOSTS file
Use a VPN (omitted)
Acceleration sites
https://gitclone.com/
ghproxy
GitHub file acceleration: https://gh.api.99988866.xyz
http://toolwa.com/github/
https://github.zhlh6.cn
https://fhefh2015.github.io/Fast-GitHub/ (browser extension)
https://mirror.ghproxy.com/
https://www.github.do/
https://hub.0z.gs/
https://ghgo.feizhuqwq.workers.dev/
https://git.yumenaka.net/
Speed up by editing the HOSTS file
Step 1: get GitHub's global.ssl.fastly address. Visit http://github.global.ssl.fastly.net.ipaddress.com/#ipinfo to get the CDN and IP for the domain:
Result: 199.232.69.194 https://github.global.ssl.fastly.net
Step 2: get the address of github.com.
Visit https://github.com.ipaddress.com/#ipinfo to get the CDN and IP:
Result: 140.82.114.4 http://github.com
Look up the DNS resolution of the following three domains (see the sketch after the list):
github.com
assets-cdn.github.com
github.global.ssl.fastly.net
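A quick way to do that lookup from Python's standard library, so you can compare what your DNS returns with the addresses from ipaddress.com before editing the hosts file:

import socket

# Resolve the three GitHub-related domains with the system resolver.
for host in ("github.com",
             "assets-cdn.github.com",
             "github.global.ssl.fastly.net"):
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as e:
        print(host, "-> resolution failed:", e)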
Edit the hosts file:
Windows: C:\Windows\System32\drivers\etc\hosts
Linux: /etc/hosts
Other repositories
https://gitee.com/
GitHub raw acceleration
GitHub raw content is served from raw.githubusercontent.com, not github.com. If the accelerators above cannot speed up that domain, you can use the reverse-proxy service provided by Static CDN.
Replace raw.githubusercontent.com with raw.staticdn.net to get the accelerated version.
Tips
IP address lookup: ipaddress
Update the http post buffer value:
git config --global http.postBuffer 1048576000
Common Questions
gnutls_handshake() failed: The TLS connection was non-properly terminated
Cloning into 'Sophus'...
fatal: unable to access 'https://github.com/yubaoliu/Sophus.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
ERROR: Service 'orbslam3' failed to build: The command '/bin/sh -c git clone https://github.com/yubaoliu/Sophus.git && cd Sophus && git checkout master && mkdir build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release && make -j3 && make install' returned a non-zero code: 128
Solution:
git config --global --unset https.https://github.com.proxy
git config --global --unset http.https://github.com.proxy
error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function.
Error Message:
error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
References
2022目前三种有效加速国内Github (2022: three effective ways to speed up GitHub access from mainland China)
hash
$$$
Linux Tips%%https://milaiai.github.io/blog/post/linux/%%22-03-10%%Check the size of each file and directory
du -h --max-depth=1
Check the remaining disk space
~ df . -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme1n1p6  492G  457G  9.6G  98% /
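The same free-space check is easy to script; a minimal sketch using only the Python standard library:

import shutil

# Equivalent of `df .`: total/used/free space of the filesystem holding the CWD.
usage = shutil.disk_usage(".")
gib = 1024 ** 3
print(f"total {usage.total / gib:.1f}G  "
      f"used {usage.used / gib:.1f}G  "
      f"free {usage.free / gib:.1f}G")
$$$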
Eigen%%https://milaiai.github.io/blog/post/eigen/%%22-01-10%%1
In file included from /root/Pangolin/components/pango_opengl/include/pangolin/gl/gldraw.h:31:0,
                 from /root/Pangolin/components/pango_opengl/src/gldraw.cpp:29:
/root/Pangolin/components/pango_opengl/include/pangolin/gl/glformattraits.h:33:24: fatal error: Eigen/Core: No such file or directory
compilation terminated.
CMakeFiles/pango_opengl.dir/build.make:98: recipe for target 'CMakeFiles/pango_opengl.dir/components/pango_opengl/src/gldraw.cpp.o' failed
make[2]: *** [CMakeFiles/pango_opengl.dir/components/pango_opengl/src/gldraw.cpp.o] Error 1
CMakeFiles/Makefile2:830: recipe for target 'CMakeFiles/pango_opengl.dir/all' failed
make[1]: *** [CMakeFiles/pango_opengl.dir/all] Error 2
Makefile:148: recipe for target 'all' failed
When this happens, first check whether the Eigen library is installed:
sudo updatedb
locate eigen3
If it is not installed, install it:
sudo apt-get install libeigen3-dev
CMake:
set(Eigen3_DIR CMAKE_INSTALL_PREFIX/share/eigen3/cmake)
find_package(Eigen3 3.3 REQUIRED)
add_executable(optimization_benchmark optimization_benchmark.cpp)
target_link_libraries(optimization_benchmark Eigen3::Eigen)
2
/root/Pangolin/components/pango_vars/include/pangolin/var/varstate.h:33:15: fatal error: any: No such file or directory
compilation terminated.
CMakeFiles/pango_vars.dir/build.make:94: recipe for target 'CMakeFiles/pango_vars.dir/components/pango_vars/src/varstate.cpp.o' failed
make[2]: *** [CMakeFiles/pango_vars.dir/components/pango_vars/src/varstate.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from /root/Pangolin/components/pango_vars/include/pangolin/var/var.h:37:0,
                 from /root/Pangolin/components/pango_vars/include/pangolin/var/varextra.h:31,
                 from /root/Pangolin/components/pango_vars/src/vars.cpp:28:
/root/Pangolin/components/pango_vars/include/pangolin/var/varstate.h:33:15: fatal error: any: No such file or directory
CMake Error at CMakeLists.txt:88 (add_library):
  Target "pango_windowing" links to target "Eigen3::Eigen" but the target was
  not found. Perhaps a find_package() call is missing for an IMPORTED target,
  or an ALIAS target is missing?
Solution:
cd Pangolin
./scripts/install_prerequisites.sh recommended
git checkout v0.6
$$$
OpenCV Usage%%https://milaiai.github.io/blog/post/opencv/%%21-12-14%%CMake Usage
cmake_minimum_required(VERSION 2.8)
project( DisplayImage )
find_package( OpenCV REQUIRED )
include_directories( ${OpenCV_INCLUDE_DIRS} )
add_executable( DisplayImage DisplayImage.cpp )
target_link_libraries( DisplayImage ${OpenCV_LIBS} )
Basic variables:
OpenCV_LIBS : The list of all imported targets for OpenCV modules.
OpenCV_INCLUDE_DIRS : The OpenCV include directories.
OpenCV_COMPUTE_CAPABILITIES : The version of compute capability.
OpenCV_ANDROID_NATIVE_API_LEVEL : Minimum required level of Android API.
OpenCV_VERSION : The version of this OpenCV build: "3.3.1"
OpenCV_VERSION_MAJOR : Major version part of OpenCV_VERSION: "3"
OpenCV_VERSION_MINOR : Minor version part of OpenCV_VERSION: "3"
OpenCV_VERSION_PATCH : Patch version part of OpenCV_VERSION: "1"
OpenCV_VERSION_STATUS : Development status of this build: ""
Advanced variables:
OpenCV_SHARED : Use OpenCV as shared library
OpenCV_INSTALL_PATH : OpenCV location
OpenCV_LIB_COMPONENTS : Present OpenCV modules list
OpenCV_USE_MANGLED_PATHS : Mangled OpenCV path flag
Deprecated variables:
OpenCV_VERSION_TWEAK : Always "0"
Test example
#include <stdio.h>
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv )
{
    if ( argc != 2 )
    {
        printf("usage: DisplayImage.out <Image_Path>\n");
        return -1;
    }
    Mat image;
    image = imread( argv[1], 1 );
    if ( !image.data )
    {
        printf("No image data \n");
        return -1;
    }
    namedWindow("Display Image", WINDOW_AUTOSIZE );
    imshow("Display Image", image);
    waitKey(0);
    return 0;
}
Possible Errors
fatal error: dynlink_nvcuvid.h: No such file or directory
In file included from /home/yubao/Software/opencv/build/modules/cudacodec/opencv_cudacodec_pch_dephelp.cxx:1:
/home/yubao/Software/opencv/modules/cudacodec/src/precomp.hpp:60:18: fatal error: dynlink_nvcuvid.h: No such file or directory
   60 | #include <dynlink_nvcuvid.h>
compilation terminated.
cat modules/cudacodec/src/precomp.hpp
#if CUDA_VERSION >= 9000
#include <dynlink_nvcuvid.h>
#else
#include <nvcuvid.h>
#endif
CUDA 10 no longer provides dynlink_nvcuvid.h; see "OpenCV CUDA 10 安装 dynlink_nvcuvid.h 问题解决方法" (fixing the dynlink_nvcuvid.h problem when building OpenCV with CUDA 10).
error: 'CODEC_FLAG_GLOBAL_HEADER' was not declared in this scope
/home/yubao/Software/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:1573:21: error: 'CODEC_FLAG_GLOBAL_HEADER' was not declared in this scope; did you mean 'AV_CODEC_FLAG_GLOBAL_HEADER'?
 1573 |         c->flags |= CODEC_FLAG_GLOBAL_HEADER;
Change CODEC_FLAG_GLOBAL_HEADER to:
AV_CODEC_FLAG_GLOBAL_HEADER
vim modules/videoio/src/cap_ffmpeg_impl.hpp
#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
error: 'AVFMT_RAWPICTURE' was not declared in this scope
/home/yubao/Software/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:1604:30: error: 'AVFMT_RAWPICTURE' was not declared in this scope
 1604 |     if (oc->oformat->flags & AVFMT_RAWPICTURE) {
#define AVFMT_RAWPICTURE 0x0020
char* str = PyString_AsString(obj)
/home/ai/yubao/opencv/modules/python/src2/cv2.cpp: In function 'bool pyopencv_to(PyObject*, T&, const char*) [with T = cv::String; PyObject = _object]':
/home/ai/yubao/opencv/modules/python/src2/cv2.cpp:856:34: error: invalid conversion from 'const char*' to 'char*' [-fpermissive]
  856 |         char* str = PyString_AsString(obj);
Solution:
vim /home/ai/yubao/opencv/modules/python/src2/cv2.cpp
template<>
bool pyopencv_to(PyObject* obj, String& value, const char* name)
{
    (void)name;
    if (!obj || obj == Py_None)
        return true;
    //char* str = PyString_AsString(obj);
    const char* str = PyString_AsString(obj);
    if (!str)
        return false;
    value = String(str);
    return true;
}
blenders.cpp
Error Message:
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp: In member function 'virtual void cv::detail::MultiBandBlender::feed(cv::InputArray, cv::InputArray, cv::Point)':
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp:412:39: error: 'cv::cuda::device' has not been declared
    using namespace cv::cuda::device::blend;
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp:412:47: error: 'blend' is not a namespace-name
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp:415:17: error: 'addSrcWeightGpu32F' was not declared in this scope
    addSrcWeightGpu32F(_src_pyr_laplace, _weight_pyr_gauss, _dst_pyr_laplace, _dst_band_weights, rc);
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp:419:17: error: 'addSrcWeightGpu16S' was not declared in this scope
    addSrcWeightGpu16S(_src_pyr_laplace, _weight_pyr_gauss, _dst_pyr_laplace, _dst_band_weights, rc);
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp: In member function 'virtual void cv::detail::MultiBandBlender::blend(cv::InputOutputArray, cv::InputOutputArray)':
/home/yubao/Software/opencv_3.3.1/modules/stitching/src/blenders.cpp:554:41: error: 'cv::cuda::device' has not been declared
    using namespace ::cv::cuda::device::blend;
Solution:
Comment out "BUILD_CUDA_STATUS".
CV_GRAY2RGB
Error message:
VINS-Fusion/vins_estimator/src/featureTracker/feature_tracker.cpp:456:36: error: 'CV_GRAY2RGB' was not declared in this scope
    cv::cvtColor(imTrack, imTrack, CV_GRAY2RGB);
Solution:
Method 1: Switch to OpenCV 3.x
https://docs.opencv.org/3.0.0/df/d4e/group__imgproc__c.html
#include <opencv2/imgproc/types_c.h>
or
#include <opencv2/imgproc/imgproc_c.h>
Method 2: update the constants to the names used by the newer OpenCV version
$$$
Papers With Source Code%%https://milaiai.github.io/blog/post/paperswithcode/%%21-05-16%%Loop Closure
FAB-MAP: https://github.com/arrenglover/openfabmap
$$$
Dataset%%https://milaiai.github.io/blog/post/dataset/%%21-05-06%%Domestic (China) download mirrors found online
Refer: https://blog.csdn.net/qq_36170626/article/details/94902166
TUM: https://pan.baidu.com/s/1nwXtGqH (password: lsgr)
KITTI: https://pan.baidu.com/s/1htFmXDE (password: uu20)
KITTI gt: https://pan.baidu.com/s/1lX6VEhl2pPpU_3Wcp3VYdg (code: 5ux2)
DSO: https://pan.baidu.com/s/1eSRmeZK (password: 6x5b)
Mono: https://pan.baidu.com/s/1jKaNB3C (password: u57r)
EuRoC: https://pan.baidu.com/s/1miXf40o (password: xm59)
KITTI raw data: https://pan.baidu.com/s/1TyXbifoTHubu3zt4jZ90Wg (code: n9ys)
EuRoC EuRoC_download
# Machine Hall
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_01_easy/MH_01_easy.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_02_easy/MH_02_easy.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_03_medium/MH_03_medium.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_04_difficult/MH_04_difficult.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_05_difficult/MH_05_difficult.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_01_easy/MH_01_easy.bag
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_02_easy/MH_02_easy.bag
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_03_medium/MH_03_medium.bag
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_04_difficult/MH_04_difficult.bag
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_05_difficult/MH_05_difficult.bag
# Vicon Room 1
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/vicon_room1/V1_01_easy/V1_01_easy.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/vicon_room1/V1_02_medium/V1_02_medium.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/vicon_room1/V1_03_difficult/V1_03_difficult.zip
# Vicon Room 2
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/vicon_room2/V2_01_easy/V2_01_easy.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/vicon_room2/V2_02_medium/V2_02_medium.zip
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/vicon_room2/V2_03_difficult/V2_03_difficult.zip
# Calibration Dataset
http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/calibration_datasets/
Loop closure
New College and City Centre
https://www.robots.ox.ac.uk/~mobile/IJRR_2008_Dataset/data.html
Papers
FAB-MAP: https://www.robots.ox.ac.uk/~mobile/IJRR_2008_Dataset/
$$$
Docker Proxy Setting%%https://milaiai.github.io/blog/post/docker/%%2021-01-10%%daemon You may encounter an error like this:
ERROR: Service 'web' failed to build: Get https://registry-1.docker.io/v2/library/python/manifests/2.7: net/http: TLS handshake timeout
One workaround is to configure a registry mirror in /etc/docker/daemon.json: { "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"] } commands: systemctl daemon-reload systemctl restart docker Remove all images and containers: docker rm $(docker ps -a -q) docker rmi $(docker images -q) Docker Proxy Sometimes we need to download development packages from the external network when doing research. However, I found that Docker could not use the proxy deployed on my host machine, especially in the build stage, such as "docker-compose build".
Usually the proxy on the host can be used when the container is up (docker-compose up).
Successful Configuration for "docker-compose build" Check the IP address of your host. Use an IP like "192.168.1.7", rather than "127.0.0.1", because "127.*" resolves inside the Docker container; a temporary container is used while the image is being built. Let the proxy server listen on the IP and port ("192.168.1.7") that you chose in the previous step. By default, a proxy server may listen only on "127.0.0.1". Note: this is the reason why I failed in the past. docker-compose example: version: '2.3' services: proxy: image: yubaoliu/proxy build: context: . dockerfile: Dockerfile args: http_proxy: $http_proxy https_proxy: $https_proxy runtime: nvidia stdin_open: true tty: true privileged: true command: xterm network_mode: host environment: - DISPLAY - QT_X11_NO_MITSHM=1 - http_proxy=$http_proxy - https_proxy=$https_proxy dns: - 8.8.8.8 - 8.8.4.4 volumes: - /tmp/.X11-unix:/tmp/.X11-unix:rw - ~/.Xauthority:/root/.Xauthority env example: http_proxy=http://192.168.1.7:41091 https_proxy=http://192.168.1.7:41091 Dockerfile example: FROM golang:1.12 RUN curl www.google.com --max-time 3 How to test the proxy: curl www.google.com --max-time 3 How to restart docker: sudo systemctl daemon-reload sudo systemctl restart docker Global config This is a global config; it is not recommended.
vim ~/.docker/config.json
{ "proxies": { "default": { "httpProxy": "http://192.168.1.7:41091", "httpsProxy": "http://192.168.1.7:41091", "noProxy": "" } } } Set Proxy inside the Dockerfile This is not recommended either:
FROM golang:1.12 ENV http_proxy "http://192.168.1.7:1087" #ENV HTTP_PROXY "http://127.0.0.1:1087" ENV https_proxy "http://192.168.1.7:1087" #ENV HTTPS_PROXY "http://127.0.0.1:1087" RUN curl www.google.com --max-time 3 Use build-arg docker build -t anguiao/nginx-brotli . --build-arg http_proxy=http://172.21.0.9:8118 --build-arg https_proxy=http://172.21.0.9:8118 Note: never write the proxy address as 127.0.0.1 or localhost; use the host machine's IP. I use the host's LAN IP here; adapt it to your own network environment.
docker.service.d mkdir /etc/systemd/system/docker.service.d
[Service] # NO_PROXY is optional and can be removed if not needed # Change proxy_url to your proxy IP or FQDN and proxy_port to your proxy port # For Proxy server which require username and password authentication, just add the proper username and password to the URL. (see example below) # Example without authentication Environment="HTTP_PROXY=http://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8" # Example with authentication Environment="HTTP_PROXY=http://username:password@proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8" references 使用代理构建 Docker 镜像 docker build时怎么用http proxy代理? $$$
Nvidia-cuda%%https://milaiai.github.io/blog/post/nvidia-cuda/%%2020-08-14%%Check the GPU compute capability (Compute Capability, GeForce and TITAN products): GeForce RTX 3060: 8.6
Check the NVIDIA version with deviceQuery: cd /usr/local/cuda-11.3/samples/1_Utilities/deviceQuery ./deviceQuery Copy it to your HOME folder and run make first if it has not been built before.
~./deviceQuery ./deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: "NVIDIA GeForce RTX 3060 Laptop GPU" CUDA Driver Version / Runtime Version 11.4 / 11.3 CUDA Capability Major/Minor version number: 8.6 Total amount of global memory: 5947 MBytes (6235422720 bytes) (030) Multiprocessors, (128) CUDA Cores/MP: 3840 CUDA Cores GPU Max Clock rate: 1702 MHz (1.70 GHz) Memory Clock rate: 7001 Mhz Memory Bus Width: 192-bit L2 Cache Size: 3145728 bytes Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total shared memory per multiprocessor: 102400 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 1536 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 2 copy engine(s) Run time limit on kernels: Yes Integrated GPU sharing Host Memory: No Support host page-locked memory mapping: Yes Alignment requirement for Surfaces: Yes Device has ECC support: Disabled Device supports Unified Addressing (UVA): Yes Device supports Managed Memory: Yes Device supports Compute Preemption: Yes Supports Cooperative Kernel Launch: Yes Supports MultiDevice Co-op Kernel Launch: Yes Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0 Compute Mode: < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) > deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.3, NumDevs = 1 Result = PASS NVIDIA X server settings Use lspci to check the GPU model: ~ lspci | grep -i nvidia 01:00.0 VGA compatible controller: NVIDIA Corporation Device 2560 (rev a1) 01:00.1 Audio device: NVIDIA Corporation Device 228e (rev a1) nvidia-smi fields: Fan: fan speed as a percentage (0 to 100%) of the target speed; shows N/A if the machine is not fan-cooled or the fan is broken; Temp: GPU temperature in degrees Celsius; Perf: performance state, from P0 (maximum performance) down to P12 (minimum performance); Pwr: power consumption; Bus-Id: information about the GPU bus; Disp.A: Display Active, whether the GPU's display output is initialized; Memory Usage: video-memory usage; Volatile GPU-Util: instantaneous GPU utilization; Compute M: compute mode;
Check the NVIDIA driver version:
~ cat /proc/driver/nvidia/version NVRM version: NVIDIA UNIX x86_64 Kernel Module 470.86 Tue Oct 26 21:55:45 UTC 2021 GCC version: gcc version 8.4.0 (Ubuntu 8.4.0-3ubuntu2) OR
sudo dpkg --list | grep nvidia-* [sudo] password for yubao: ii libnvidia-cfg1-470:amd64 470.86-0ubuntu0.20.04.1 amd64 NVIDIA binary OpenGL/GLX configuration library ii libnvidia-common-465 470.86-0ubuntu0.20.04.1 all Transitional package for libnvidia-common-470 ii libnvidia-common-470 470.86-0ubuntu0.20.04.1 all Shared files used by the NVIDIA libraries ii libnvidia-compute-465:amd64 470.86-0ubuntu0.20.04.1 amd64 Transitional package for libnvidia-compute-470 ii libnvidia-compute-470:amd64 470.86-0ubuntu0.20.04.1 amd64 NVIDIA libcompute package ii libnvidia-compute-470:i386 470.86-0ubuntu0.20.04.1 i386 NVIDIA libcompute package ii libnvidia-container-tools 1.7.0-1 amd64 NVIDIA container runtime library (command-line tools) ii libnvidia-container1:amd64 1.7.0-1 amd64 NVIDIA container runtime library Errors F1213 06:10:43.716547 365 im2col.cu:61] Check failed: error == cudaSuccess (209 vs. 0) no kernel image is available for execution on the device *** Check failure stack trace: *** @ 0x7fe53d7a20cd google::LogMessage::Fail() @ 0x7fe53d7a3f33 google::LogMessage::SendToLog() @ 0x7fe53d7a1c28 google::LogMessage::Flush() @ 0x7fe53d7a4999 google::LogMessageFatal::~LogMessageFatal() @ 0x7fe53a9c0e95 caffe::im2col_gpu<>() @ 0x7fe53a7bfeb6 caffe::BaseConvolutionLayer<>::conv_im2col_gpu() @ 0x7fe53a7bffb6 caffe::BaseConvolutionLayer<>::forward_gpu_gemm() @ 0x7fe53a971c41 caffe::ConvolutionLayer<>::Forward_gpu() @ 0x7fe53a8e5322 caffe::Net<>::ForwardFromTo() @ 0x7fe53a8e5437 caffe::Net<>::Forward() @ 0x7fe53e1d210a Classifier::Predict() @ 0x7fe53e1c2549 segnet_ros::SegNet::SegmentImage() @ 0x7fe53e1c5088 segnet_ros::SegNet::Run() @ 0x7fe53b53ebcd (unknown) @ 0x7fe53b3156db start_thread @ 0x7fe53cf2571f clone [segnet_action_server-2] process has died [pid 351, exit code -6, cmd /root/catkin_ws/devel/lib/segnet_ros/segnet_action_server __name:=segnet_action_server __log:=/root/.ros/log/5ff90f90-5bdb-11ec-be69-e02be97a7691/segnet_action_server-2.log]. log file: /root/.ros/log/5ff90f90-5bdb-11ec-be69-e02be97a7691/segnet_action_server-2*.log Solution:
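If PyTorch is available, one quick diagnostic (a minimal sketch, assuming a CUDA-enabled PyTorch build; this is my addition, not from the original logs) is to compare the GPU's compute capability with the architectures the framework was compiled for:

# Minimal sketch: compare the GPU's compute capability against the
# architectures the installed PyTorch build was compiled for.
# Assumes a CUDA-enabled PyTorch installation.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU compute capability: {major}.{minor}")
    print("Architectures in this build:", torch.cuda.get_arch_list())
else:
    print("CUDA is not available")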
Check Your GPU Compute Capability Your GPU Compute Capability [ caffe运行错误: im2col.cu:61] Check failed: error == cudaSuccess (8 vs. 0) invalid device function](https://www.cnblogs.com/haiyang21/p/7381032.html) error == cudaSuccess (209 vs. 0) no kernel image is available for execution on the device Nvidia/Titan RTX Check failed: error == cudaSuccess (48 vs. 0) no kernel image is available for execution on the device 1290 References NVIDIA CUDA Toolkit Release Notes $$$
Agent引擎的实现%%https://milaiai.github.io/blog/post/ch3.2.5-6-agent-%E5%BC%95%E6%93%8E%E7%9A%84%E5%AE%9E%E7%8E%B0/%%2019-03-10%%Implementing the Agent
Overview In the previous section we studied the state-transition function and drew the robot.
The goal of this section is to implement the robot's engine so that the robot can move.
Notes We build an agent class that decides the robot's control commands. In robotics and artificial-intelligence research, the "thinking subject" is called an agent. At this stage the agent simply returns fixed values of $\nu, \omega$ at every fixed time step. hasattr is a function that checks whether an object has a given method. We make the simulation duration in seconds (time_span) and $\Delta t$ (time_interval) configurable. Theory The robot issues control commands through the agent. Control command: $u = (\nu, \omega)^\top$. Set the simulation duration (time_span) and the per-frame time interval (time_interval); number of frames = time_span / time_interval. hasattr is used to check whether an object has a given attribute. Sample Code # -*- coding: utf-8 -*- """ch3 robot model Automatically generated by Colaboratory. Original file is located at https://colab.research.google.com/drive/1s6LUufRD3f70hqtnyt9tsTqXnEJN7QL1 """ # Commented out IPython magic to ensure Python compatibility. # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as patches import math import numpy as np # Animation import matplotlib matplotlib.use('nbagg') import matplotlib.animation as anm from matplotlib import rc """# Draw world coordinate""" class World: def __init__(self, time_span, time_interval, debug=False): self.objects = [] self.debug = debug self.time_span = time_span self.time_interval = time_interval def append(self, obj): self.objects.append(obj) def draw(self): global ani fig = plt.figure(figsize=(4, 4)) plt.close() ax = fig.add_subplot(111) ax.set_aspect('equal') ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) ax.set_xlabel("X", fontsize=20) ax.set_ylabel("Y", fontsize=20) elems = [] if self.debug: for i in range(1000): self.one_step(i, elems, ax) else: ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax), frames=int(self.time_span/self.time_interval)+1, interval=int(self.time_interval*1000), repeat=False ) plt.show() def one_step(self, i, elems, ax): while elems: elems.pop().remove() elems.append(ax.text(-4.4, 4.5, "t="+str(i), fontsize=10) ) for obj in self.objects: obj.draw(ax, elems) if hasattr(obj, "one_step"): obj.one_step(1.0) class Agent: def __init__(self, nu, omega): self.nu = nu self.omega = omega def decision(self, observation=None): return self.nu, self.omega """# Robot Object""" class IdealRobot: def __init__(self, pose, agent=None, color="black"): self.pose = pose self.r = 0.2 self.color = color self.agent = agent self.poses = [pose] def draw(self, ax, elems): x, y, theta = self.pose xn = x + self.r * math.cos(theta) yn = y + self.r * math.sin(theta) elems += ax.plot([x, xn], [y, yn], color=self.color) c = patches.Circle(xy=(x,y), radius=self.r, fill=False, color=self.color) elems.append(ax.add_patch(c)) self.poses.append(self.pose) elems+=ax.plot( [e[0] for e in self.poses], [e[1] for e in self.poses], linewidth=0.5, color="black") @classmethod def state_transition(cls, nu, omega, delta_t, pose): theta_t_pre = pose[2] if math.fabs(omega) < 1e-10: return pose + np.array([nu * math.cos(theta_t_pre), nu * math.sin(theta_t_pre), omega ]) * delta_t else: return pose + np.array([ nu/omega * (math.sin(theta_t_pre + omega * delta_t) - math.sin(theta_t_pre)), nu/omega * (-math.cos(theta_t_pre + omega * delta_t) + math.cos(theta_t_pre)), omega * delta_t ]) def one_step(self, time_interval): if not self.agent: return nu, omega = self.agent.decision() self.pose = self.state_transition(nu, omega, time_interval, self.pose) # Commented out IPython magic to ensure Python compatibility. 
# %matplotlib inline world = World(time_span = 36, time_interval = 1, debug=False) straight = Agent(0.2, 0.0) circling = Agent(0.2, 10.0/180*math.pi) robot1 = IdealRobot(np.array([1, 1, math.pi/6]).T, straight) robot2 = IdealRobot(np.array([-2, -1, math.pi/5*6]).T, circling, "red") robot3 = IdealRobot(np.array([0, 0, 0]).T, color="blue") world.append(robot1) world.append(robot2) world.append(robot3) world.draw() # this is needed to show animation within colab rc('animation', html='jshtml') ani # or HTML(ani.to_jshtml()) $$$
Network%%https://milaiai.github.io/blog/post/network/%%2019-03-10%%DNS vim /etc/resolv.conf
nameserver 8.8.8.8 nameserver 8.8.4.4 APT $ sudo touch /etc/apt/apt.conf.d/proxy.conf $ sudo gedit /etc/apt/apt.conf.d/proxy.conf Acquire { HTTP::proxy "http://127.0.0.1:8080"; HTTPS::proxy "http://127.0.0.1:8080"; } Sftp Install the SSH server: sudo apt-get install openssh-server If sshd is listed, connections can succeed: ps -e | grep ssh If sshd is not listed, start it: sudo /etc/init.d/ssh start $$$
ORB_SLAM3%%https://milaiai.github.io/blog/post/orb-slam/%%2019-03-10%%Get ORB_SLAM3 git clone https://github.com/yubaoliu/ORB_SLAM3.git cd ORB_SLAM3 git checkout dev Deploy ORB_SLAM3 Build OpenCV OpenCV will be installed when you install ROS.
Build the third-party libraries
chmod +x build.sh ./build.sh Run ORB_SLAM3 with RealSense Start the RealSense camera node: roslaunch realsense2_camera rs_rgbd.launch Start the ORB_SLAM3 node: roslaunch orb_slam3 realsense.launch $$$
VIM%%https://milaiai.github.io/blog/post/vim/%%2019-03-10%%Debug go-vim-debugging-with-gdb Vim 调试:termdebug 入门 Debugging in Vim How to use ConqueGDB in Vim How does debugging with VIM and gdb? Termdebug :packadd termdebug Markdown HELLO
VIM 之插件篇
import cv2 echo "hello" $$ a+b - 1= c^2 $$
Command - Function
Ctrl + ] - Go to definition
Ctrl + T - Jump back from the definition
Ctrl + W Ctrl + ] - Open the definition in a horizontal split
:ts <tag_name> - List the tags that match <tag_name>
:tn - Jump to the next matching tag
:tp - Jump to the previous matching tag
Shortcuts Ctrl+]: take the word under the cursor as the tag name and jump to it. Ctrl+t or Ctrl+o: jump back to the previous tag. Ctrl+w+]: split the current window and jump to the tag under the cursor. Ctags List the languages ctags supports: ctags --list-languages List the mapping between languages and file extensions: ctags --list-maps List the syntax elements ctags can recognize and record: ctags --list-kinds ctags --list-kinds=c++ Generate tags for all files of supported languages under the current directory: ctags -R * By default ctags does not extract tags for every identifier; the following command generates a more detailed tags file:
ctags -R --c++-kinds=+p+l+x+c+d+e+f+g+m+n+s+t+u+v --fields=+liaS --extra=+q Generate tags only for specific files: ctags `find -name "*.h"` Jump to a given tag, for example: tag tagname List the tags visited so far: tags Same-named tags: if several tags share a name, the tag command shows a list and you can type a tag's number to choose it; you can also filter tags with tselect, e.g.: :tselect tagname To move between multiple matching tags, use the following commands:
:tfirst go to first match :[count]tprevious go to [count] previous match :[count]tnext go to [count] next match :tlast go to last match Other notes: if there are multiple tags files, set the tags option to include them, for example: :set tags=./tags, ./../tags, ./*/tags When using the tag command you can type part of a tag name and complete it with the Tab key.
References Vim自动生成tags插件vim-gutentags安装和自动跳转方法-Vim插件(10) Vim使用ctags实现函数跳转-Vim入门教程(13) $$$
强化学习%%https://milaiai.github.io/blog/post/reinforcement_learning/%%2019-03-10%%Resources https://deepreinforcementlearningbook.org/
https://github.com/deep-reinforcement-learning-book
Reinforcement Learning Book: https://www.dbooks.org/reinforcement-learning-0262039249/
Simulation environment: a maze $$$
机器人位姿描述%%https://milaiai.github.io/blog/post/ch3.1-%E6%9C%BA%E5%99%A8%E4%BA%BA%E4%BD%8D%E5%A7%BF%E6%8F%8F%E8%BF%B0/%%2019-03-10%%Objective: draw the world coordinate frame; describe the robot's pose; draw the world frame; draw the robot's pose. Reference: 3.2.2 ロボットの姿勢と描く
Differential wheeled robot (対向2輪ロボット) Robot pose The world frame is denoted $\Sigma_{world}$
Pose (state): position and orientation, $x = (x, y, \theta)^T$
State space: the set of all poses (states).
The set $\chi$ of all possible values of the pose x; for example, the state space of a robot moving freely inside a rectangular region of the plane is:
$$ \chi = \{ x=(x, y, \theta)^T \mid x \in [x_{min}, x_{max}], y \in [y_{min}, y_{max}], \theta \in [- \pi, \pi) \} $$
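As a small illustration, a pose can be sampled uniformly from this state space (a minimal sketch; the bounds are made-up example values, not from the book):

# Minimal sketch: sample a random pose from the rectangular state space.
# The bounds below are made-up example values.
import numpy as np

x_min, x_max = -5.0, 5.0
y_min, y_max = -5.0, 5.0

pose = np.array([
    np.random.uniform(x_min, x_max),
    np.random.uniform(y_min, y_max),
    np.random.uniform(-np.pi, np.pi),  # theta in [-pi, pi)
])
print(pose)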
Source Code import matplotlib.pyplot as plt import matplotlib.patches as patches import math import numpy as np class World: def __init__(self): self.objects = [] def append(self, obj): self.objects.append(obj) def draw(self): fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(111) ax.set_aspect('equal') ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) ax.set_xlabel("X", fontsize=20) ax.set_ylabel("Y", fontsize=20) for obj in self.objects: obj.draw(ax) plt.show() class IdealRobot: def __init__(self, pose, color="black"): self.pose = pose # pose self.r = 0.2 # radius self.color = color # color def draw(self, ax): x, y, theta = self.pose xn = x + self.r * math.cos(theta) yn = y + self.r * math.sin(theta) ax.plot([x, xn], [y, yn], color=self.color) c = patches.Circle(xy=(x,y), radius=self.r, fill=False, color=self.color) ax.add_patch(c) world = World() robot1 = IdealRobot(np.array([2, 3, math.pi/6]).T) robot2 = IdealRobot(np.array([-2, -1, math.pi/5*6]).T, "red") world.append(robot1) world.append(robot2) world.draw() References 詳解 確率ロボティクス Pythonによる基礎アルゴリズムの実装 $$$
机器人开发环境介绍%%https://milaiai.github.io/blog/post/ch1-environment/%%2019-03-10%%Introduction to the robot development environment In this section, we will introduce:
the use cases of robots; the development environment for simulation (Python + conda). 概率机器人详解 (Probabilistic Robotics explained) Homepage
Slides: ryuichiueda/LNPR_SLIDES
Original book code: ryuichiueda/LNPR_BOOK_CODES
My source code: https://github.com/yubaoliu/Probabilistic-Robotics.git
Robot Introduction Soccer match:
Human support robot:
Note: you can find these videos on https://space.bilibili.com/52620240 too.
Environment Deployment (optional) Anaconda or another virtual Python environment Jupyter notebook You can refer to https://www.ybliu.com/2021/01/OpenCV-Python-Development.html to deploy a conda-based development environment.
Test Environment Run jupyter notebook: jupyter notebook Add the virtual env to the notebook: conda install -c anaconda ipykernel python -m ipykernel install --user --name=robotics jupyter notebook Draw world coordinate Source code:
import matplotlib.pyplot as plt class World: def __init__(self): pass def draw(self): fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(111) ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) ax.set_xlabel("X", fontsize=20) ax.set_ylabel("Y", fontsize=20) plt.show() world = World() world.draw() $$$
机器人概率基础%%https://milaiai.github.io/blog/post/ch2-probabilistics/%%2019-03-10%%
Mean $$\mu = \frac{1}{N}\sum_{i=0}^{N-1} z_i$$
$z_0, z_1, \dots, z_{N-1}$: sensor values; $N$: the number of sensor values. Variance and standard deviation $$\sigma^2 = \frac{1}{N-1}\sum_{i=0}^{N-1} (z_i - \mu)^2 \quad (N>1)$$
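A minimal numpy sketch of both formulas (the sensor readings are made-up example values):

# Minimal sketch: sample mean and unbiased variance of sensor readings.
# The readings below are made-up example values.
import numpy as np

z = np.array([209, 210, 211, 210, 208, 212], dtype=float)
mu = z.mean()              # mean: (1/N) * sum(z_i)
sigma2 = z.var(ddof=1)     # unbiased variance: divide by N-1
print(mu, sigma2, np.sqrt(sigma2))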
(Naive) probability distribution What we want to do here: from the frequency distribution, predict which sensor values are likely to be observed in the future
However, the result must not depend on how many samples we collect, so convert the frequency distribution from counts to proportions: * $P_{\textbf{z}\text{LiDAR}}(z) = N_z / N$ ($N_z$: the number of times the sensor value was $z$) * summing $P_{\textbf{z}\text{LiDAR}}(z)$ over all possible sensor values gives 1. We call $P_{\textbf{z}\text{LiDAR}}(z)$ a probability. Drawing a sample:
$$ z \sim P_{\textbf{z}\text{LiDAR}} $$
Probabilistic Model Fitting a Gaussian distribution
The continuous case $$ p(z | \mu, \sigma^2 ) = \frac{1}{\sqrt{2\pi}\sigma} e^{ - \frac{(z - \mu)^2}{2\sigma^2}} $$
$$ p(x | \mu, \sigma^2 ) $$
$\mu$: mean, $\sigma$: standard deviation. Probability Density Function (PDF). How to obtain probabilities from a Gaussian: integrate $p(x | \mu, \sigma^2 )$. The value of $p$ is called a density; integrating a density gives a probability (just like a volume). Examples: the probability that the sensor value is smaller than $210$: $P(z < 210) = \int_{-\infty}^{210} p(z | \mu, \sigma^2 ) dz$; the probability that the sensor value is $210$: $P(z = 210) = \int_{209.5}^{210.5} p(z | \mu, \sigma^2 ) dz$
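Both integrals can be evaluated with the Gaussian CDF; a minimal sketch (mu and stddev are made-up example values):

# Minimal sketch: evaluate P(z < 210) and P(z = 210) via the Gaussian CDF.
# mu and stddev are made-up example values.
from scipy.stats import norm

mu, stddev = 209.7, 4.8
p_less = norm.cdf(210, mu, stddev)                                  # P(z < 210)
p_210 = norm.cdf(210.5, mu, stddev) - norm.cdf(209.5, mu, stddev)   # P(z = 210)
print(p_less, p_210)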
The function $p$ that returns the density is the probability density function; the shape of $p$, or $p$ itself, is also often called a probability distribution. A Gaussian distribution in particular is written $\mathcal{N}$: $$ \mathcal{N}(z | \mu, \sigma^2 ), \mathcal{N}(\mu, \sigma^2) $$
and so on.
import matplotlib.pyplot as plt from scipy.stats import norm zs = range(190, 230) ys = [ norm.pdf(z, mu, stddev) for z in zs ] plt.plot(zs, ys) plt.show() (mu and stddev are the mean and standard deviation computed above.) Cumulative Distribution Function (CDF) $P(z < a) = \int_{-\infty}^a p(z) dz$
is called the cumulative distribution function (shown in a figure in the original slides). $P(a \le z < b) = P(z < b) - P(z < a)$
Expected value Expectation: the mean obtained when drawing infinitely many sensor values,
written as $\langle z \rangle_{P(z)}$ or $\langle z \rangle_{p(z)}$
$$ \langle z \rangle_{P(z)} = \sum_{z=-\infty}^{\infty} zP(z) $$
Generalized expectation: when $z$ follows $p(z)$, what value does $f(z)$ take on average? $$ \langle f(z) \rangle_{p(z)} = \int_{-\infty}^{\infty} f(z)p(z) dz $$
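A Monte-Carlo reading of this definition (a minimal sketch; mu, stddev, and the choice of f are made-up example values): draw many samples from $p(z)$ and average $f(z)$ over them.

# Minimal sketch: approximate <f(z)> under a Gaussian p(z) by sampling.
# mu and stddev are made-up example values; f is an arbitrary test function.
import numpy as np

mu, stddev = 209.7, 4.8
f = lambda z: (z - mu) ** 2        # this choice of f recovers the variance

samples = np.random.normal(mu, stddev, size=100_000)
print(f(samples).mean())           # should be close to stddev**2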
Properties of the expectation
Linearity $\big\langle f(z) + \alpha g(z) \big\rangle_{p(z)} = \big\langle f(z) \big\rangle_{p(z)} + \alpha \big\langle g(z) \big\rangle_{p(z)}$ $\big\langle f(z) + \alpha \big\rangle_{p(z)} = \big\langle f(z) \big\rangle_{p(z)} + \alpha \big\langle 1 \big\rangle_{p(z)} = \big\langle f(z) \big\rangle_{p(z)} + \alpha$
Mean $\langle z \rangle_{p(z)} = \mu$, $\langle z - \mu \rangle_{p(z)} = 0$
Variance $\langle (z - \mu)^2 \rangle_{p(z)} = \sigma^2$ $$$
深度学习%%https://milaiai.github.io/blog/post/deeplearning/%%2019-03-10%%Environment Setup !pip install numpy scipy matplotlib ipython scikit-learn pandas pillow Introduction to Artificial Neural Network Activation Function Step function import numpy as np import matplotlib.pylab as plt def step_function(x): return np.array(x>0, dtype=int) x = np.arange(-5.0, 5.0, 0.1) y = step_function(x) plt.plot(x, y) plt.ylim(-0.1, 1.1) plt.show() Sigmoid Function import numpy as np import matplotlib.pylab as plt def sigmoid(x): return 1 / (1 + np.exp(-x)) x = np.arange(-5.0, 5.0, 0.1) y = sigmoid(x) plt.plot(x, y) plt.ylim(-0.1, 1.1) plt.show() Relu Function $$ f(x)=\max(0,x) $$
import numpy as np import matplotlib.pylab as plt def relu(x): return np.maximum(0, x) x = np.arange(-6.0, 6.0, 0.1) y = relu(x) plt.plot(x, y) plt.ylim(-1, 6) plt.show() Loss functions Sum of squared error $$ E = \frac{1}{2} \sum_{k} (y_k -t_k)^2 $$
Cross Entropy error $$ E = - \sum_k t_k \log y_k $$
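A direct numpy transcription of these two losses (a minimal sketch; the epsilon guard against log(0) is my addition, not from the post):

# Minimal sketch: sum-of-squared-error and cross-entropy losses in numpy.
# The epsilon guard against log(0) is an addition.
import numpy as np

def sum_squared_error(y, t):
    return 0.5 * np.sum((y - t) ** 2)

def cross_entropy_error(y, t):
    eps = 1e-7
    return -np.sum(t * np.log(y + eps))

t = np.array([0, 0, 1])          # one-hot target
y = np.array([0.1, 0.2, 0.7])    # predicted probabilities
print(sum_squared_error(y, t), cross_entropy_error(y, t))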
References What is Rectified Linear Unit (ReLU)? | Introduction to ReLU Activation Function
Machine Learning Glossary
$$$
用动画来绘制Robot仿真环境%%https://milaiai.github.io/blog/post/ch3.2.3-%E6%9C%BA%E5%99%A8%E4%BA%BA%E4%BD%8D%E5%A7%BF%E5%8A%A8%E7%94%BB%E4%BB%BF%E7%9C%9F%E7%8E%AF%E5%A2%83/%%2019-03-10%%Objective: draw the robot simulation environment with animation. Key function matplotlib.animation.FuncAnimation class matplotlib.animation.FuncAnimation(fig, func, frames=None, init_func=None, fargs=None, save_count=None, *, cache_frame_data=True, **kwargs) interval: number, optional. Delay between frames in milliseconds; defaults to 200. frames: iterable, int, generator function, or None, optional. fargs: tuple or None, optional. Additional arguments to pass to each call to func. Refer to https://matplotlib.org/api/_as_gen/matplotlib.animation.FuncAnimation.html for details.
matplotlib.pyplot.plot https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html#matplotlib.pyplot.plot matplotlib.pyplot.plot(*args, scalex=True, scaley=True, data=None, **kwargs) Note the return value: "lines: A list of Line2D objects representing the plotted data." It is a list object.
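A minimal sketch of the consequence of this: the book's code extends with elems += ax.plot(...) but appends the single artist returned by ax.add_patch(...):

# Minimal sketch: ax.plot returns a LIST of Line2D artists, while
# ax.add_patch returns a single Patch artist.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, ax = plt.subplots()
lines = ax.plot([0, 1], [0, 1])   # list of Line2D
circle = ax.add_patch(patches.Circle((0.5, 0.5), 0.1, fill=False))

elems = []
elems += lines         # extend with the returned list
elems.append(circle)   # append the single artist
print(type(lines), type(circle), len(elems))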
Notes The arguments of one_step are the step number i, the list elems of drawn figure elements, and the subplot ax. The arguments passed to anm.FuncAnimation are, in order: the figure object fig; the method one_step that advances the time by one step; the arguments passed to one_step; the total number of steps to draw, frames; the step period interval (in ms); and the flag repeat that controls whether playback loops. elems += ax.plot([x, xn], [y, yn], color=self.color): here two lists are added instead of calling append because ax.plot returns a list; the objects in the list returned by ax.plot have the type matplotlib.lines.Line2D. ax.add_patch(c) returns a single object of type matplotlib.patches.Circle, so that one is appended.
今のシミュレーションでは一秒ごとにコマを書き換えしました。あるコマの時刻をt、次のコマの時刻をt+1などと表記します。 这里是用的离散的时间表示的,与实际是不同的。 Examples %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as patches import math import numpy as np # Animation import matplotlib matplotlib.use('nbagg') import matplotlib.animation as anm from matplotlib import rc %matplotlib inline class World: def __init__(self, debug=False): self.objects = [] self.debug = debug def append(self, obj): self.objects.append(obj) def draw(self): global ani fig = plt.figure(figsize=(4, 4)) plt.close() ax = fig.add_subplot(111) ax.set_aspect('equal') ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) ax.set_xlabel("X", fontsize=20) ax.set_ylabel("Y", fontsize=20) elems = [] if self.debug: for i in range(1000): self.one_step(i, elems, ax) else: ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax), frames=10, interval=1000, repeat=False ) plt.show() def one_step(self, i, elems, ax): while elems: elems.pop().remove() elems.append(ax.text(-4.4, 4.5, "t="+str(i), fontsize=10) ) for obj in self.objects: obj.draw(ax, elems) class IdealRobot: def __init__(self, pose, color="black"): self.pose = pose self.r = 0.2 self.color = color def draw(self, ax, elems): x, y, theta = self.pose xn = x + self.r * math.cos(theta) yn = y + self.r * math.sin(theta) elems += ax.plot([x, xn], [y, yn], color=self.color) c = patches.Circle(xy=(x,y), radius=self.r, fill=False, color=self.color) elems.append(ax.add_patch(c)) %matplotlib inline world = World(debug=False) robot1 = IdealRobot(np.array([2, 3, math.pi/6]).T) robot2 = IdealRobot(np.array([-2, -1, math.pi/5*6]).T, "red") world.append(robot1) world.append(robot2) world.draw() # this is needed to show animation whithin colab rc('animation', html='jshtml') ani # or HTML(anim.to_jshtml() $$$
算法-动态规划%%https://milaiai.github.io/blog/post/%E7%AE%97%E6%B3%95-%E5%8A%A8%E6%80%81%E8%A7%84%E5%88%92/%%2019-03-10%%Online judges https://www.luogu.com.cn/ https://onlinejudge.org/ https://leetcode-cn.com/ Dynamic programming Fibonacci numbers The Fibonacci numbers, usually written F(n), form the Fibonacci sequence: the sequence starts from 0 and 1, and each later term is the sum of the two preceding ones. That is:
F(0) = 0, F(1) = 1, F(n) = F(n - 1) + F(n - 2) for n > 1. Given n, compute F(n).
Source: LeetCode, https://leetcode-cn.com/problems/fibonacci-number
Example:
Input: 4 Output: 3 Explanation: F(4) = F(3) + F(2) = 2 + 1 = 3 Sample code:
int fib(int n) { int F[n+1]; F[0] = 0; if (n <= 0) return F[0]; F[1] = 1; if (n == 1) return F[1]; for (int i = 2; i < n+1; i++) { F[i] = F[i - 1] + F[i - 2]; } return F[n]; } Climbing stairs Suppose you are climbing a staircase; it takes n steps to reach the top.
Each time you may climb 1 or 2 steps. In how many distinct ways can you climb to the top?
int climbStairs(int n){ int S[n+1]; S[1] = 1; if (n == 1) return S[1]; S[2] = 2; if (n == 2) return S[2]; for (int i = 3; i <= n; i++) { S[i] = S[i - 1] + S[i - 2]; } return S[n]; } Min cost climbing stairs Each index of the array is a stair, and the i-th stair has a non-negative cost cost[i] (0-indexed).
Each time you climb onto a stair you must pay its cost; once paid, you may climb either one or two stairs.
Find the minimum cost to reach the top of the staircase. At the start you may begin from the stair at index 0 or index 1.
int min(int a, int b) { return a>b?b:a; } int minCostClimbingStairs(int* cost, int costSize){ int C[costSize + 1]; C[0] = cost[0]; if (costSize == 1) return C[0]; C[1] = min(C[0] + cost[1], cost[1]); if (costSize == 2) return min(C[0], C[1]); for (int i = 2; i < costSize; i++) { C[i] = min(C[i - 1], C[i - 2]) + cost[i]; } return min(C[costSize - 1], C[costSize - 2]); } $$$
算法-最短路径%%https://milaiai.github.io/blog/post/%E7%AE%97%E6%B3%95-%E6%9C%80%E7%9F%AD%E8%B7%AF%E5%BE%84/%%2019-03-10%%Shortest paths Dijkstra's algorithm A greedy single-source shortest-path algorithm; it requires all edge weights in the graph to be non-negative.
Dijkstra’s shortest path algorithm
戴克斯特拉算法-wiki
Algorithm description procedure Dijkstra(G: a graph whose edges all have positive weights) 2 {G has vertices $a=v_{0},v_{1},v_{2}…$ and edges $w(v_{i},v_{j})$} 3 for i:=1 to n 4 $D(v_{i}):=\infty $ 5 D(a):=0 6 $S:=\emptyset$ 7 while $z\notin S$ 8 begin 9 u := the vertex not in S with the smallest D(u) 10 $S:=S\cup \{u\}$ 11 for every vertex v not in S 12 if D(u)+w(u,v)<D(v) then D(v):=D(u)+w(u,v) 13 end {D(z) = the length of the shortest path from a to z}
Using a priority queue:
1 function Dijkstra(G, w, s) 2 INITIALIZE-SINGLE-SOURCE(G, s) // in practice: set d[v] to infinity for every vertex except the source, and d[s]=0 3 $S\leftarrow \emptyset$ 4 $Q\leftarrow s$ // Q is a priority queue of the vertices V, ordered by shortest-path estimate 5 while( $Q\not =\emptyset $) 6 do $u\leftarrow EXTRACT-MIN(Q)$ // pick u as the vertex in Q with the smallest shortest-path estimate 7 $S\leftarrow S\cup u$ 8 for each vertex $v \in Adj[u]$ 9 do RELAX(u, v, w) // vertices that were successfully relaxed are added to the queue
http://codeforces.com/blog/entry/16221 :
Pseudo code :
dijkstra(v) : d[i] = inf for each vertex i d[v] = 0 s = new empty set while s.size() < n x = inf u = -1 for each i in V-s //V is the set of vertices if x >= d[i] then x = d[i], u = i insert u into s // The process from now is called Relaxing for each i in adj[u] d[i] = min(d[i], d[u] + w(u,i)) int mark[MAXN]; void dijkstra(int v){ fill(d,d + n, inf); fill(mark, mark + n, false); d[v] = 0; int u; while(true){ int x = inf; u = -1; for(int i = 0;i < n;i ++) if(!mark[i] and x >= d[i]) x = d[i], u = i; if(u == -1) break; mark[u] = true; for(auto p : adj[u]) //adj[v][i] = pair(vertex, weight) if(d[p.first] > d[u] + p.second) d[p.first] = d[u] + p.second; } } Two) Using std :: set : void dijkstra(int v){ fill(d,d + n, inf); d[v] = 0; int u; set<pair<int,int> > s; s.insert({d[v], v}); while(!s.empty()){ u = s.begin() -> second; s.erase(s.begin()); for(auto p : adj[u]) //adj[v][i] = pair(vertex, weight) if(d[p.first] > d[u] + p.second){ s.erase({d[p.first], p.first}); d[p.first] = d[u] + p.second; s.insert({d[p.first], p.first}); } } } Using std :: priority_queue (better): bool mark[MAXN]; void dijkstra(int v){ fill(d,d + n, inf); fill(mark, mark + n, false); d[v] = 0; int u; priority_queue<pair<int,int>,vector<pair<int,int> >, greater<pair<int,int> > > pq; pq.push({d[v], v}); while(!pq.empty()){ u = pq.top().second; pq.pop(); if(mark[u]) continue; mark[u] = true; for(auto p : adj[u]) //adj[v][i] = pair(vertex, weight) if(d[p.first] > d[u] + p.second){ d[p.first] = d[u] + p.second; pq.push({d[p.first], p.first}); } } } Problem: ShortestPath Query
Implement 1 // A C++ program for Dijkstra's single source shortest path algorithm. // The program is for adjacency matrix representation of the graph #include <iostream> using namespace std; #include <limits.h> // Number of vertices in the graph #define V 9 // A utility function to find the vertex with minimum distance value, from // the set of vertices not yet included in shortest path tree int minDistance(int dist[], bool sptSet[]) { // Initialize min value int min = INT_MAX, min_index; for (int v = 0; v < V; v++) if (sptSet[v] == false && dist[v] <= min) min = dist[v], min_index = v; return min_index; } // A utility function to print the constructed distance array void printSolution(int dist[]) { cout <<"Vertex \t Distance from Source" << endl; for (int i = 0; i < V; i++) cout << i << " \t\t"<<dist[i]<< endl; } // Function that implements Dijkstra's single source shortest path algorithm // for a graph represented using adjacency matrix representation void dijkstra(int graph[V][V], int src) { int dist[V]; // The output array. dist[i] will hold the shortest // distance from src to i bool sptSet[V]; // sptSet[i] will be true if vertex i is included in shortest // path tree or shortest distance from src to i is finalized // Initialize all distances as INFINITE and stpSet[] as false for (int i = 0; i < V; i++) dist[i] = INT_MAX, sptSet[i] = false; // Distance of source vertex from itself is always 0 dist[src] = 0; // Find shortest path for all vertices for (int count = 0; count < V - 1; count++) { // Pick the minimum distance vertex from the set of vertices not // yet processed. u is always equal to src in the first iteration. int u = minDistance(dist, sptSet); // Mark the picked vertex as processed sptSet[u] = true; // Update dist value of the adjacent vertices of the picked vertex. for (int v = 0; v < V; v++) // Update dist[v] only if is not in sptSet, there is an edge from // u to v, and total weight of path from src to v through u is // smaller than current value of dist[v] if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX && dist[u] + graph[u][v] < dist[v]) dist[v] = dist[u] + graph[u][v]; } // print the constructed distance array printSolution(dist); } // driver program to test above function int main() { /* Let us create the example graph discussed above */ int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 }, { 4, 0, 8, 0, 0, 0, 0, 11, 0 }, { 0, 8, 0, 7, 0, 4, 0, 0, 2 }, { 0, 0, 7, 0, 9, 14, 0, 0, 0 }, { 0, 0, 0, 9, 0, 10, 0, 0, 0 }, { 0, 0, 4, 14, 10, 0, 2, 0, 0 }, { 0, 0, 0, 0, 0, 2, 0, 1, 6 }, { 8, 11, 0, 0, 0, 0, 1, 0, 7 }, { 0, 0, 2, 0, 0, 0, 6, 7, 0 } }; dijkstra(graph, 0); return 0; } // This code is contributed by shivanisinghss2110 Implement 2: priority_queue priority_queue 模板有 3 个参数,其中两个有默认的参数;第一个参数是存储对象的类型,第二个参数是存储元素的底层容器,第三个参数是函数对象,它定义了一个用来决定元素顺序的断言。因此模板类型是:
template <typename T, typename Container=std::vector<T>, typename Compare=std::less<T>> class priority_queue As you can see, a priority_queue instance uses a vector container by default. The function-object type less, defined in the header <functional>, is the default ordering predicate; it puts the largest element at the front of the queue. <functional> also defines greater, which can be given as the last template parameter so that the smallest element comes first. Of course, if you specify the last template parameter, you must also provide the other two template type parameters.
#include<bits/stdc++.h> using namespace std; # define INF 0x3f3f3f3f // iPair ==> Integer Pair(整数对) typedef pair<int, int> iPair; // 加边 void addEdge(vector <pair<int, int> > adj[], int u, int v, int wt) { adj[u].push_back(make_pair(v, wt)); adj[v].push_back(make_pair(u, wt)); } // 计算最短路 void shortestPath(vector<pair<int,int> > adj[], int V, int src) { // 关于stl中的优先队列如何实现,参考下方网址: // http://geeksquiz.com/implement-min-heap-using-stl/ priority_queue< iPair, vector <iPair> , greater<iPair> > pq; // 距离置为正无穷大 vector<int> dist(V, INF); vector<bool> visited(V, false); // 插入源点,距离为0 pq.push(make_pair(0, src)); dist[src] = 0; /* 循环直到优先队列为空 */ while (!pq.empty()) { // 每次从优先队列中取出顶点事实上是这一轮最短路径权值确定的点 int u = pq.top().second; pq.pop(); if (visited[u]) { continue; } visited[u] = true; // 遍历所有边 for (auto x : adj[u]) { // 得到顶点边号以及边权 int v = x.first; int weight = x.second; //可以松弛 if (dist[v] > dist[u] + weight) { // 松弛 dist[v] = dist[u] + weight; pq.push(make_pair(dist[v], v)); } } } // 打印最短路 printf("Vertex Distance from Source\n"); for (int i = 0; i < V; ++i) printf("%d \t\t %d\n", i, dist[i]); } int main() { int V = 9; vector<iPair > adj[V]; addEdge(adj, 0, 1, 4); addEdge(adj, 0, 7, 8); addEdge(adj, 1, 2, 8); addEdge(adj, 1, 7, 11); addEdge(adj, 2, 3, 7); addEdge(adj, 2, 8, 2); addEdge(adj, 2, 5, 4); addEdge(adj, 3, 4, 9); addEdge(adj, 3, 5, 14); addEdge(adj, 4, 5, 10); addEdge(adj, 5, 6, 2); addEdge(adj, 6, 7, 1); addEdge(adj, 6, 8, 6); addEdge(adj, 7, 8, 7); shortestPath(adj, V, 0); return 0; } P4779 单源最短路径 给定一个 n 个点,m 条有向边的带非负权图,请你计算从 s 出发,到每个点的距离。
The data guarantees that every vertex is reachable from s.
Floyd-Warshall is a multi-source shortest-path algorithm based on dynamic programming.
Floyd-Warshall() d[v][u] = inf for each pair (v,u) d[v][v] = 0 for each vertex v for k = 1 to n for i = 1 to n for j = 1 to n d[i][j] = min(d[i][j], d[i][k] + d[k][j]) Time complexity: O(n^3).
Bellman-Ford Handles negative edge weights and can also detect negative cycles.
Performing |V|-1 rounds of relaxation finds, in theory, the shortest paths from the source to every other vertex.
If further relaxation is still possible afterwards, the graph contains a negative cycle.
Its advantages over Dijkstra's algorithm are that edge weights may be negative and the implementation is simple; the drawback is the high time complexity, up to O(|V||E|).
The Bellman-Ford algorithm simply relaxes all edges, |V|-1 times in total, where |V| is the number of vertices in the graph.
procedure BellmanFord(list vertices, list edges, vertex source) // reads the lists of edges and vertices and writes the shortest paths into distance and predecessor // initialize the graph for each vertex v in vertices: if v is source then distance[v] := 0 else distance[v] := infinity predecessor[v] := null // repeat for every edge for i from 1 to size(vertices)-1: for each edge (u, v) with weight w in edges: if distance[u] + w < distance[v]: distance[v] := distance[u] + w predecessor[v] := u // check for negative-weight cycles for each edge (u, v) with weight w in edges: if distance[u] + w < distance[v]: error "the graph contains a negative-weight cycle" http://codeforces.com/blog/entry/16221 :
Bellman-Ford(int v) d[i] = inf for each vertex i d[v] = 0 for step = 1 to n for all edges like e i = e.first // first end j = e.second // second end w = e.weight if d[j] > d[i] + w if step == n then return "Negative cycle found" d[j] = d[i] + w Time complexity : O(nm).
SPFA (Shortest Path Faster Algorithm) https://zh.wikipedia.org/wiki/%E6%9C%80%E7%9F%AD%E8%B7%AF%E5%BE%84%E5%BF%AB%E9%80%9F%E7%AE%97%E6%B3%95 Internationally it is generally regarded as the queue-optimized Bellman-Ford algorithm.
Here Q is a first-in-first-out queue of candidate vertices, and w(u, v) is the edge weight.
procedure Shortest-Path-Faster-Algorithm(G, s) for each vertex v ≠ s in V(G) d(v) := ∞ d(s) := 0 offer s into Q while Q is not empty u := poll Q for each edge (u, v) in E(G) if d(u) + w(u, v) < d(v) then d(v) := d(u) + w(u, v) if v is not in Q then offer v into Q http://codeforces.com/blog/entry/16221 :
SPFA(v): d[i] = inf for each vertex i d[v] = 0 queue q q.push(v) while q is not empty u = q.front() q.pop() for each i in adj[u] if d[i] > d[u] + w(u,i) then d[i] = d[u] + w(u,i) if i is not in q then q.push(i) References Algorithm Gym :: Graph Algorithms
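For comparison with the C++ versions above, a runnable Python sketch of Dijkstra with a binary heap (my own transcription, not from the referenced posts):

# Minimal sketch: Dijkstra's algorithm with a binary heap (lazy deletion).
# graph maps each vertex to a list of (neighbor, weight) pairs.
import heapq

def dijkstra(graph, src):
    dist = {v: float("inf") for v in graph}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:        # stale entry, skip
            continue
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(g, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}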
http://codeforces.com/
$$$
绘制Landmark%%https://milaiai.github.io/blog/post/ch3.3.1-%E7%BB%98%E5%88%B6landmark/%%2019-03-10%%Drawing map points (landmarks)
Overview 概率机器人详解 (Python) 3.3.1 点ランドマークの設置 (placing point landmarks)
This article covers:
what a landmark is; how to draw landmarks; a skeleton of the Landmark class and the Map class. Theory Landmarks: $m = \{ m_j \mid j=0, 1, 2, …, N_m-1 \}$, $N_m$ in total. A map records the positions of all landmarks. Landmark $m_j$: its coordinates in the world frame are $m_j = ( m_{j,x}, m_{j,y} )$. Key code Landmark class:
class Landmark: def __init__(self, x, y): self.pos = np.array([x, y]).T self.id = None def draw(self, ax, elems): c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color= "orange") elems.append(c) elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10)) Map class:
class Map: def __init__(self): self.landmarks = [] def append_landmark(self, landmark): landmark.id = len(self.landmarks) self.landmarks.append(landmark) def draw(self, ax, elems): for lm in self.landmarks: lm.draw(ax, elems) 注释 使用List来存放Landmark 这里有一个技巧,使用list的长度来作为Landmark 的ID landmark.id = len(self.landmarks) Full sample code # -*- coding: utf-8 -*- """ch3.3.1 robot model Automatically generated by Colaboratory. Original file is located at https://colab.research.google.com/drive/1MhN_M2QWqelAvr4TGhM_-QewTDPkamYy """ # Commented out IPython magic to ensure Python compatibility. # %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as patches import math import numpy as np # Animation import matplotlib matplotlib.use('nbagg') import matplotlib.animation as anm from matplotlib import rc """# Draw world coordinate""" class World: def __init__(self, time_span, time_interval, debug=False): self.objects = [] self.debug = debug self.time_span = time_span self.time_interval = time_interval def append(self, obj): self.objects.append(obj) def draw(self): global ani fig = plt.figure(figsize=(4, 4)) plt.close() ax = fig.add_subplot(111) ax.set_aspect('equal') ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) ax.set_xlabel("X", fontsize=20) ax.set_ylabel("Y", fontsize=20) elems = [] if self.debug: for i in range(1000): self.one_step(i, elems, ax) else: ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax), frames=int(self.time_span/self.time_interval)+1, interval=int(self.time_interval*1000), repeat=False ) plt.show() def one_step(self, i, elems, ax): while elems: elems.pop().remove() elems.append(ax.text(-4.4, 4.5, "t="+str(i), fontsize=10) ) for obj in self.objects: obj.draw(ax, elems) if hasattr(obj, "one_step"): obj.one_step(1.0) class Agent: def __init__(self, nu, omega): self.nu = nu self.omega = omega def decision(self, observation=None): return self.nu, self.omega """# Robot Object""" class IdealRobot: def __init__(self, pose, agent=None, color="black"): self.pose = pose self.r = 0.2 self.color = color self.agent = agent self.poses = [pose] def draw(self, ax, elems): x, y, theta = self.pose xn = x + self.r * math.cos(theta) yn = y + self.r * math.sin(theta) elems += ax.plot([x, xn], [y, yn], color=self.color) c = patches.Circle(xy=(x,y), radius=self.r, fill=False, color=self.color) elems.append(ax.add_patch(c)) self.poses.append(self.pose) elems+=ax.plot( [e[0] for e in self.poses], [e[1] for e in self.poses], linewidth=0.5, color="black") @classmethod def state_transition(cls, nu, omega, delta_t, pose): theta_t_pre = pose[2] if math.fabs(omega) < 1e-10: return pose + np.array([nu * math.cos(theta_t_pre), nu * math.sin(theta_t_pre), omega ]) * delta_t else: return pose + np.array([ nu/omega * (math.sin(theta_t_pre + omega * delta_t) - math.sin(theta_t_pre)), nu/omega * (-math.cos(theta_t_pre + omega * delta_t) + math.cos(theta_t_pre)), omega * delta_t ]) def one_step(self, time_interval): if not self.agent: return nu, omega = self.agent.decision() self.pose = self.state_transition(nu, omega, time_interval, self.pose) class Landmark: def __init__(self, x, y): self.pos = np.array([x, y]).T self.id = None def draw(self, ax, elems): c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color= "orange") elems.append(c) elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10)) class Map: def __init__(self): self.landmarks = [] def append_landmark(self, landmark): landmark.id = len(self.landmarks) self.landmarks.append(landmark) 
def draw(self, ax, elems): for lm in self.landmarks: lm.draw(ax, elems) # Commented out IPython magic to ensure Python compatibility. # %matplotlib inline world = World(time_span = 36, time_interval = 1, debug=False) straight = Agent(0.2, 0.0) circling = Agent(0.2, 10.0/180*math.pi) robot1 = IdealRobot(np.array([1, 1, math.pi/6]).T, straight) robot2 = IdealRobot(np.array([-2, -1, math.pi/5*6]).T, circling, "red") robot3 = IdealRobot(np.array([0, 0, 0]).T, color="blue") world.append(robot1) world.append(robot2) world.append(robot3) # Map m = Map() m.append_landmark(Landmark(2, -2)) m.append_landmark(Landmark(-1, -3)) m.append_landmark(Landmark(3, 3)) world.append(m) world.draw() # this is needed to show animation whithin colab rc('animation', html='jshtml') ani # or HTML(anim.to_jshtml() $$$
观测方程%%https://milaiai.github.io/blog/post/ch3.3-%E8%A7%82%E6%B5%8B%E6%96%B9%E7%A8%8B/%%2019-03-10%%Observation equation $$ \begin{pmatrix} \ell_j \\ \varphi_j \end{pmatrix} = \begin{pmatrix} \sqrt{(m_{j,x} - x)^2 + (m_{j,y} - y)^2} \\ \text{atan2}(m_{j,y} - y, m_{j,x} - x) - \theta \end{pmatrix} $$
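A direct numeric transcription of this equation (a minimal sketch; the camera pose and landmark position are made-up values); the full IdealCamera class follows below:

# Minimal sketch: evaluate the observation equation for one landmark.
# The camera pose and landmark position are made-up example values.
import math

def observation(cam_pose, landmark):
    x, y, theta = cam_pose
    mx, my = landmark
    ell = math.hypot(mx - x, my - y)                  # range
    phi = math.atan2(my - y, mx - x) - theta          # bearing
    phi = (phi + math.pi) % (2 * math.pi) - math.pi   # normalize to [-pi, pi)
    return ell, phi

print(observation((0.0, 0.0, math.pi / 6), (2.0, -2.0)))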
$z_j = h_j (x)$, or $z_j = h(x, m_j)$ (when the landmark position is treated as a variable). The function $h_j$ is called the observation function. Reference code class IdealCamera: def __init__(self, env_map, \ distance_range=(0.5, 6.0), direction_range=(-math.pi/3, math.pi/3)): self.map = env_map self.lastdata = [] self.distance_range = distance_range self.direction_range = direction_range def visible(self, polarpos): if polarpos is None: return False return self.distance_range[0] <= polarpos[0] <= self.distance_range[1] \ and self.direction_range[0] <= polarpos[1] <=self.direction_range[1] def data(self, cam_pose): observed = [] for lm in self.map.landmarks: z = self.observation_function(cam_pose, lm.pose) if self.visible(z): observed.append( (z, lm.id) ) self.lastdata = observed return observed @classmethod def observation_function(cls, cam_pose, obj_pose): diff = obj_pose - cam_pose[0:2] phi = math.atan2(diff[1], diff[0]) - cam_pose[2] while phi>=np.pi: phi -= 2*np.pi while phi<-np.pi: phi += 2*np.pi return np.array( [np.hypot(*diff), phi] ).T def draw(self, ax, elems, cam_pose): for obs in self.lastdata: x, y, theta = cam_pose distance, direction = obs[0][0], obs[0][1] lx = x + distance * math.cos(direction + theta) ly = y + distance * math.sin(direction + theta) elems += ax.plot([x, lx], [y, ly], color = "pink") $$$
运动方程%%https://milaiai.github.io/blog/post/ch3.2.4-%E8%BF%90%E5%8A%A8%E6%96%B9%E7%A8%8B/%%2019-03-10%%Contents: the motion equations and control commands, making the robot move. Theory (referred from: https://github.com/ryuichiueda/LNPR_SLIDES/blob/master/old_version/figs/robot_motion1.png)
(referred from: https://github.com/ryuichiueda/LNPR_SLIDES/raw/master/old_version/figs/robot_motion2.png)
Relevant variables: velocity $\nu\ [m/s]$; angular velocity $\omega\ [rad/s]$; control command from time $t-1$ to time $t$: $u_t = (\nu_t, \omega_t)$. The control command can only be changed at discrete time steps; the command covering the interval from time $t-1$ to $t$ is written $u_t = (\nu_t, \omega_t)$.
u is expressed relative to the robot; how should its velocity be expressed in the world frame?
$$ \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix} = \begin{pmatrix} \nu\cos\theta \\ \nu\sin\theta \\ \omega \end{pmatrix} $$
The change in heading from time t-1 to time t:
$$ \theta_t = \theta_{t-1} + \int_{0}^{\Delta t} \omega_t dt = \theta_{t-1} + \omega_t \Delta t $$
The position update from time t-1 to time t:
$$ \begin{pmatrix} x_t \\ y_t \end{pmatrix} = \begin{pmatrix} x_{t-1} \\ y_{t-1} \end{pmatrix} + \begin{pmatrix} \int_{0}^{\Delta t}\nu_t \cos(\theta_{t-1} + \omega t) dt \\ \int_{0}^{\Delta t}\nu_t \sin(\theta_{t-1} + \omega t) dt \end{pmatrix} $$
The robot's motion equations (p. 70):
if $\omega_t == 0$:
$$ \begin{pmatrix} x_t \\ y_t \\ \theta_t \end{pmatrix} = \begin{pmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{pmatrix} + \begin{pmatrix} \nu_t \cos \theta_{t-1} \\ \nu_t \sin \theta_{t-1} \\ \omega_t \end{pmatrix} \Delta t $$
else:
$$ \begin{pmatrix} x_t \\ y_t \\ \theta_t \end{pmatrix} = \begin{pmatrix} x_{t-1} \\ y_{t-1} \\ \theta_{t-1} \end{pmatrix} + \begin{pmatrix} \nu_t \omega_t^{-1} {\sin( \theta_{t-1} + \omega_t \Delta t ) - \sin \theta_{t-1} } \\ \nu_t \omega_t^{-1} {-\cos( \theta_{t-1} + \omega_t \Delta t ) + \cos \theta_{t-1} } \\ \omega_t \Delta t \end{pmatrix} $$
State-transition function
$$ x_t = f(x_{t-1}, u_t), (t= 1,2, 3,…) $$
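A direct transcription of the two cases above as a standalone function (a minimal sketch; the command values in the final check are made-up):

# Minimal sketch: one step of the state-transition function f(x, u).
# pose = (x, y, theta); nu, omega form the control command; dt is the step.
import math
import numpy as np

def state_transition(nu, omega, dt, pose):
    theta = pose[2]
    if math.fabs(omega) < 1e-10:   # straight-line motion
        return pose + np.array([nu * math.cos(theta),
                                nu * math.sin(theta),
                                omega]) * dt
    return pose + np.array([
        nu / omega * (math.sin(theta + omega * dt) - math.sin(theta)),
        nu / omega * (-math.cos(theta + omega * dt) + math.cos(theta)),
        omega * dt])

# made-up check: drive at 0.2 m/s while turning 10 deg/s for one second
print(state_transition(0.2, math.radians(10), 1.0, np.array([0.0, 0.0, 0.0])))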
Example %matplotlib inline import matplotlib.pyplot as plt import matplotlib.patches as patches import math import numpy as np # Animation import matplotlib matplotlib.use('nbagg') import matplotlib.animation as anm from matplotlib import rc %matplotlib inline class World: def __init__(self, debug=False): self.objects = [] self.debug = debug def append(self, obj): self.objects.append(obj) def draw(self): global ani fig = plt.figure(figsize=(4, 4)) plt.close() ax = fig.add_subplot(111) ax.set_aspect('equal') ax.set_xlim(-5, 5) ax.set_ylim(-5, 5) ax.set_xlabel("X", fontsize=20) ax.set_ylabel("Y", fontsize=20) elems = [] if self.debug: for i in range(1000): self.one_step(i, elems, ax) else: ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax), frames=10, interval=1000, repeat=False ) plt.show() def one_step(self, i, elems, ax): while elems: elems.pop().remove() elems.append(ax.text(-4.4, 4.5, "t="+str(i), fontsize=10) ) for obj in self.objects: obj.state_transition(1, 0.0, 1.0) obj.draw(ax, elems) class IdealRobot: def __init__(self, pose, color="black"): self.pose = pose self.r = 0.2 self.color = color def draw(self, ax, elems): x, y, theta = self.pose xn = x + self.r * math.cos(theta) yn = y + self.r * math.sin(theta) elems += ax.plot([x, xn], [y, yn], color=self.color) c = patches.Circle(xy=(x,y), radius=self.r, fill=False, color=self.color) elems.append(ax.add_patch(c)) def state_transition(self, v_t, w_t, delta_t): theta_t_pre = self.pose[2] if math.fabs(w_t) < 1e-10: self.pose += np.array([v_t * math.cos(theta_t_pre), v_t * math.sin(theta_t_pre), w_t ]) * delta_t else: self.pose += np.array([ v_t/w_t * (math.sin(theta_t_pre + w_t * delta_t) - math.sin(theta_t_pre)), v_t/w_t * (-math.cos(theta_t_pre + w_t * delta_t) + math.cos(theta_t_pre)), w_t * delta_t ]) %matplotlib inline world = World(debug=False) robot1 = IdealRobot(np.array([2, 3, math.pi/5*6]).T) robot2 = IdealRobot(np.array([-4, -4, math.pi/4]).T, "red") world.append(robot1) world.append(robot2) world.draw() # this is needed to show animation whithin colab rc('animation', html='jshtml') ani # or HTML(anim.to_jshtml() $$$