How do I port CRFAsRNN to caffe-windows?

Answered by 盘默M2 · 2016-12-14
(1) Port the auxiliary files
Copy coords.hpp and modified_permutohedral.hpp from include/caffe/util/ in the CRFAsRNN source tree into the corresponding directory of caffe-windows, and copy src/caffe/util/modified_permutohedral.cpp into its corresponding directory as well.
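For orientation, the files should land at the same relative paths on the caffe-windows side. A minimal sketch of the resulting layout, assuming your caffe-windows checkout keeps the upstream Caffe directory structure (the common caffe-windows branches do):

    caffe-windows/
      include/caffe/util/coords.hpp                  (copied from CRFAsRNN include/caffe/util/)
      include/caffe/util/modified_permutohedral.hpp  (copied from CRFAsRNN include/caffe/util/)
      src/caffe/util/modified_permutohedral.cpp      (copied from CRFAsRNN src/caffe/util/)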

(2) Port the new features of the Layer class
In include/caffe/layer.hpp, add the following include:

#include "caffe/util/coords.hpp"

and the following method to the Layer class:

  virtual DiagonalAffineMap<Dtype> coord_map() {
    NOT_IMPLEMENTED;
    // suppress warnings
    return DiagonalAffineMap<Dtype>(vector<pair<Dtype, Dtype> >());
  }

The modified file looks like this:

#ifndef CAFFE_LAYER_H_
#define CAFFE_LAYER_H_
#include <algorithm>
#include <string>
#include <vector>

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/layer_factory.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/coords.hpp"
#include "caffe/util/math_functions.hpp"
/**
 Forward declare boost::thread instead of including boost/thread.hpp
 to avoid a boost/NVCC issues (#1009, #1010) on OSX.
 */
namespace boost { class mutex; }

namespace caffe {
/**
 * @brief An interface for the units of computation which can be composed into a
 *        Net.
 *
 * Layer%s must implement a Forward function, in which they take their input
 * (bottom) Blob%s (if any) and compute their output Blob%s (if any).
 * They may also implement a Backward function, in which they compute the error
 * gradients with respect to their input Blob%s, given the error gradients with
 * their output Blob%s.
 */
template <typename Dtype>
class Layer {
 public:
  /**
   * You should not implement your own constructor. Any set up code should go
   * to SetUp(), where the dimensions of the bottom blobs are provided to the
   * layer.
   */
  explicit Layer(const LayerParameter& param)
    : layer_param_(param), is_shared_(false) {
      // Set phase and copy blobs (if there are any).
      phase_ = param.phase();
      if (layer_param_.blobs_size() > 0) {
        blobs_.resize(layer_param_.blobs_size());
        for (int i = 0; i < layer_param_.blobs_size(); ++i) {
          blobs_[i].reset(new Blob<Dtype>());
          blobs_[i]->FromProto(layer_param_.blobs(i));
        }
      }
    }
  virtual ~Layer() {}
  /**
   * @brief Implements common layer setup functionality.
   *
   * @param bottom the preshaped input blobs
   * @param top
   *     the allocated but unshaped output blobs, to be shaped by Reshape
   *
   * Checks that the number of bottom and top blobs is correct.
   * Calls LayerSetUp to do special layer setup for individual layer types,
   * followed by Reshape to set up sizes of top blobs and internal buffers.
   * Sets up the loss weight multiplier blobs for any non-zero loss weights.
   * This method may not be overridden.
   */
  void SetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    InitMutex();
    CheckBlobCounts(bottom, top);
    LayerSetUp(bottom, top);
    Reshape(bottom, top);
    SetLossWeights(top);
  }
  /**
   * @brief Does layer-specific setup: your layer should implement this function
   *        as well as Reshape.
   *
   * @param bottom
   *     the preshaped input blobs, whose data fields store the input data for
   *     this layer
   * @param top
   *     the allocated but unshaped output blobs
   *
   * This method should do one-time layer specific setup. This includes reading
   * and processing relevant parameters from the <code>layer_param_</code>.
   * Setting up the shapes of top blobs and internal buffers should be done in
   * <code>Reshape</code>, which will be called before the forward pass to
   * adjust the top blob sizes.
   */
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {}
  /**
   * @brief Whether a layer should be shared by multiple nets during data
   *        parallelism. By default, all layers except for data layers should
   *        not be shared. Data layers should be shared to ensure each worker
   *        solver accesses data sequentially during data parallelism.
   */
  virtual inline bool ShareInParallel() const { return false; }

  /** @brief Return whether this layer is actually shared by other nets.
   *         If ShareInParallel() is true and using more than one GPU and the
   *         net has TRAIN phase, then this function is expected to return true.
   */
  inline bool IsShared() const { return is_shared_; }

  /** @brief Set whether this layer is actually shared by other nets.
   *         If ShareInParallel() is true and using more than one GPU and the
   *         net has TRAIN phase, then is_shared should be set true.
   */
  inline void SetShared(bool is_shared) {
    CHECK(ShareInParallel() || !is_shared)
        << type() << "Layer does not support sharing.";
    is_shared_ = is_shared;
  }
  /**
   * @brief Adjust the shapes of top blobs and internal buffers to accommodate
   *        the shapes of the bottom blobs.
   *
   * @param bottom the input blobs, with the requested input shapes
   * @param top the top blobs, which should be reshaped as needed
   *
   * This method should reshape top blobs as needed according to the shapes
   * of the bottom (input) blobs, as well as reshaping any internal buffers
   * and making any other necessary adjustments so that the layer can
   * accommodate the bottom blobs.
   */
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) = 0;
  /**
   * @brief Given the bottom blobs, compute the top blobs and the loss.
   *
   * @param bottom
   *     the input blobs, whose data fields store the input data for this layer
   * @param top
   *     the preshaped output blobs, whose data fields will store this layer's
   *     outputs
   * \return The total loss from the layer.
   *
   * The Forward wrapper calls the relevant device wrapper function
   * (Forward_cpu or Forward_gpu) to compute the top blob values given the
   * bottom blobs. If the layer has any non-zero loss_weights, the wrapper
   * then computes and returns the loss.
   *
   * Your layer should implement Forward_cpu and (optionally) Forward_gpu.
   */
  inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  /**
   * @brief Given the top blob error gradients, compute the bottom blob error
   *        gradients.
   *
   * @param top
   *     the output blobs, whose diff fields store the gradient of the error
   *     with respect to themselves
   * @param propagate_down
   *     a vector with equal length to bottom, with each index indicating
   *     whether to propagate the error gradients down to the bottom blob at
   *     the corresponding index
   * @param bottom
   *     the input blobs, whose diff fields will store the gradient of the error
   *     with respect to themselves after Backward is run
   *
   * The Backward wrapper calls the relevant device wrapper function
   * (Backward_cpu or Backward_gpu) to compute the bottom blob diffs given the
   * top blob diffs.
   *
   * Your layer should implement Backward_cpu and (optionally) Backward_gpu.
   */
  inline void Backward(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom);
  /**
   * @brief Returns the vector of learnable parameter blobs.
   */
  vector<shared_ptr<Blob<Dtype> > >& blobs() {
    return blobs_;
  }

  /**
   * @brief Returns the layer parameter.
   */
  const LayerParameter& layer_param() const { return layer_param_; }

  /**
   * @brief Writes the layer parameter to a protocol buffer
   */
  virtual void ToProto(LayerParameter* param, bool write_diff = false);

  /**
   * @brief Returns the scalar loss associated with a top blob at a given index.
   */
  inline Dtype loss(const int top_index) const {
    return (loss_.size() > top_index) ? loss_[top_index] : Dtype(0);
  }
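Once coords.hpp and the coord_map() stub above are in place, the layers involved in the CRF-as-RNN coordinate mapping override coord_map() themselves. The following is only a rough sketch of what such an override looks like for an element-wise layer; it assumes DiagonalAffineMap in caffe/util/coords.hpp provides a static identity() factory, as it does in the FCN-style Caffe branches this port is based on, and MyPointwiseLayer is a made-up name used only for illustration.

// Hypothetical illustration, not part of the original porting steps.
#include "caffe/layer.hpp"
#include "caffe/util/coords.hpp"

namespace caffe {

// An element-wise layer whose output grid coincides with its input grid
// reports an identity coordinate map over the two spatial axes, instead of
// falling back to the NOT_IMPLEMENTED stub added to Layer above.
template <typename Dtype>
class MyPointwiseLayer : public Layer<Dtype> {
 public:
  explicit MyPointwiseLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}

  virtual DiagonalAffineMap<Dtype> coord_map() {
    // identity(2): scale 1, offset 0 for both spatial dimensions (assumed
    // to be provided by coords.hpp).
    return DiagonalAffineMap<Dtype>::identity(2);
  }

  // Reshape, Forward_cpu, Backward_cpu, etc. are omitted here; a real layer
  // still has to implement them as usual.
};

}  // namespace caffe

Layers that resample the image, such as convolution and pooling, instead build their coordinate map from kernel size, stride and pad in those branches, which is exactly what the helpers in coords.hpp are for.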