How to port CRFAsRNN to caffe-windows
(1) Port the auxiliary files
Copy coords.hpp and modified_permutohedral.hpp from include/caffe/util/ in the CRF-as-RNN Caffe source tree into the corresponding include/caffe/util/ directory of caffe-windows, and copy src/caffe/util/modified_permutohedral.cpp into the corresponding src/caffe/util/ directory.
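Of these, coords.hpp is the header that layer.hpp will depend on in step (2): it defines the DiagonalAffineMap<Dtype> type returned by the coord_map() hook added below, which records, per spatial axis, how output coordinates map back to input coordinates. The following is only a rough, self-contained sketch of that idea, written here for illustration and not copied from the fork; the class is deliberately renamed DiagonalAffineMapSketch and its members (coefs_, compose, inv) are assumptions, so refer to the actual coords.hpp you copied rather than this sketch.

// Illustrative sketch only: a diagonal affine map stores one (scale, shift)
// pair per spatial axis, so a chain of layers can report how its output
// coordinates relate to its input coordinates. Member names are assumptions,
// not the real coords.hpp API.
#include <cassert>
#include <cstdio>
#include <utility>
#include <vector>

template <typename Dtype>
class DiagonalAffineMapSketch {
 public:
  explicit DiagonalAffineMapSketch(
      const std::vector<std::pair<Dtype, Dtype> >& coefs)
      : coefs_(coefs) {}

  // Identity map over nd spatial axes: scale 1, shift 0 on each axis.
  static DiagonalAffineMapSketch identity(int nd) {
    return DiagonalAffineMapSketch(std::vector<std::pair<Dtype, Dtype> >(
        nd, std::make_pair(Dtype(1), Dtype(0))));
  }

  // Compose with a map applied before this one:
  // x -> a1 * (a2 * x + b2) + b1 = (a1 * a2) * x + (a1 * b2 + b1).
  DiagonalAffineMapSketch compose(const DiagonalAffineMapSketch& other) const {
    assert(coefs_.size() == other.coefs_.size());
    std::vector<std::pair<Dtype, Dtype> > out(coefs_.size());
    for (size_t i = 0; i < coefs_.size(); ++i) {
      out[i].first = coefs_[i].first * other.coefs_[i].first;
      out[i].second = coefs_[i].first * other.coefs_[i].second + coefs_[i].second;
    }
    return DiagonalAffineMapSketch(out);
  }

  // Invert each axis: x -> (x - b) / a.
  DiagonalAffineMapSketch inv() const {
    std::vector<std::pair<Dtype, Dtype> > out(coefs_.size());
    for (size_t i = 0; i < coefs_.size(); ++i) {
      out[i].first = Dtype(1) / coefs_[i].first;
      out[i].second = -coefs_[i].second / coefs_[i].first;
    }
    return DiagonalAffineMapSketch(out);
  }

  const std::vector<std::pair<Dtype, Dtype> >& coefs() const { return coefs_; }

 private:
  std::vector<std::pair<Dtype, Dtype> > coefs_;
};

int main() {
  // A 2-axis map that downsamples by 2 with no offset, composed with identity.
  std::vector<std::pair<float, float> > c(2, std::make_pair(2.0f, 0.0f));
  DiagonalAffineMapSketch<float> down(c);
  DiagonalAffineMapSketch<float> combined =
      down.compose(DiagonalAffineMapSketch<float>::identity(2));
  std::printf("scale=%.1f shift=%.1f\n",
              combined.coefs()[0].first, combined.coefs()[0].second);
  return 0;
}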
(2) Port the new Layer features
Add the following include to include/caffe/layer.hpp:

#include "caffe/util/coords.hpp"

and add the following method to the Layer class:

  virtual DiagonalAffineMap<Dtype> coord_map() {
    NOT_IMPLEMENTED;
    // suppress warnings
    return DiagonalAffineMap<Dtype>(vector<pair<Dtype, Dtype> >());
  }

The modified layer.hpp then looks like this:

#ifndef CAFFE_LAYER_H_
#define CAFFE_LAYER_H_
#include <algorithm>
#include <string>
#include <vector>

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/layer_factory.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/coords.hpp"
#include "caffe/util/math_functions.hpp"

/**
 Forward declare boost::thread instead of including boost/thread.hpp
 to avoid a boost/NVCC issues (#1009, #1010) on OSX.
 */
namespace boost { class mutex; }

namespace caffe {

/**
 * @brief An interface for the units of computation which can be composed into a
 *        Net.
 *
 * Layer%s must implement a Forward function, in which they take their input
 * (bottom) Blob%s (if any) and compute their output Blob%s (if any).
 * They may also implement a Backward function, in which they compute the error
 * gradients with respect to their input Blob%s, given the error gradients with
 * their output Blob%s.
 */
template <typename Dtype>
class Layer {
 public:
  /**
   * You should not implement your own constructor. Any set up code should go
   * to SetUp(), where the dimensions of the bottom blobs are provided to the
   * layer.
   */
  explicit Layer(const LayerParameter& param)
    : layer_param_(param), is_shared_(false) {
      // Set phase and copy blobs (if there are any).
      phase_ = param.phase();
      if (layer_param_.blobs_size() > 0) {
        blobs_.resize(layer_param_.blobs_size());
        for (int i = 0; i < layer_param_.blobs_size(); ++i) {
          blobs_[i].reset(new Blob<Dtype>());
          blobs_[i]->FromProto(layer_param_.blobs(i));
        }
      }
    }
  virtual ~Layer() {}

  /**
   * @brief Implements common layer setup functionality.
   *
   * @param bottom the preshaped input blobs
   * @param top
   *     the allocated but unshaped output blobs, to be shaped by Reshape
   *
   * Checks that the number of bottom and top blobs is correct.
   * Calls LayerSetUp to do special layer setup for individual layer types,
   * followed by Reshape to set up sizes of top blobs and internal buffers.
   * Sets up the loss weight multiplier blobs for any non-zero loss weights.
   * This method may not be overridden.
   */
  void SetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    InitMutex();
    CheckBlobCounts(bottom, top);
    LayerSetUp(bottom, top);
    Reshape(bottom, top);
    SetLossWeights(top);
  }

  /**
   * @brief Does layer-specific setup: your layer should implement this function
   *        as well as Reshape.
   *
   * @param bottom
   *     the preshaped input blobs, whose data fields store the input data for
   *     this layer
   * @param top
   *     the allocated but unshaped output blobs
   *
   * This method should do one-time layer specific setup. This includes reading
   * and processing relevent parameters from the <code>layer_param_</code>.
   * Setting up the shapes of top blobs and internal buffers should be done in
   * <code>Reshape</code>, which will be called before the forward pass to
   * adjust the top blob sizes.
   */
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {}

  /**
   * @brief Whether a layer should be shared by multiple nets during data
   *        parallelism. By default, all layers except for data layers should
   *        not be shared. data layers should be shared to ensure each worker
   *        solver access data sequentially during data parallelism.
   */
  virtual inline bool ShareInParallel() const { return false; }

  /** @brief Return whether this layer is actually shared by other nets.
   *         If ShareInParallel() is true and using more than one GPU and the
   *         net has TRAIN phase, then this function is expected return true.
   */
  inline bool IsShared() const { return is_shared_; }

  /** @brief Set whether this layer is actually shared by other nets
   *         If ShareInParallel() is true and using more than one GPU and the
   *         net has TRAIN phase, then is_shared should be set true.
   */
  inline void SetShared(bool is_shared) {
    CHECK(ShareInParallel() || !is_shared)
        << type() << "Layer does not support sharing.";
    is_shared_ = is_shared;
  }

  /**
   * @brief Adjust the shapes of top blobs and internal buffers to accommodate
   *        the shapes of the bottom blobs.
   *
   * @param bottom the input blobs, with the requested input shapes
   * @param top the top blobs, which should be reshaped as needed
   *
   * This method should reshape top blobs as needed according to the shapes
   * of the bottom (input) blobs, as well as reshaping any internal buffers
   * and making any other necessary adjustments so that the layer can
   * accommodate the bottom blobs.
   */
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) = 0;

  /**
   * @brief Given the bottom blobs, compute the top blobs and the loss.
   *
   * @param bottom
   *     the input blobs, whose data fields store the input data for this layer
   * @param top
   *     the preshaped output blobs, whose data fields will store this layers'
   *     outputs
   * \return The total loss from the layer.
   *
   * The Forward wrapper calls the relevant device wrapper function
   * (Forward_cpu or Forward_gpu) to compute the top blob values given the
   * bottom blobs. If the layer has any non-zero loss_weights, the wrapper
   * then computes and returns the loss.
   *
   * Your layer should implement Forward_cpu and (optionally) Forward_gpu.
   */
  inline Dtype Forward(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
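The original answer's listing stops here, at the declaration of Forward; the CRF-as-RNN-specific changes visible so far are just the coords.hpp include and the virtual coord_map() hook. Layers whose output coordinates differ from their input coordinates are expected to override that hook so crop-style layers can align blobs. As a hedged illustration (not code from the CRF-as-RNN fork), a hypothetical pass-through layer could override it roughly as follows; everything named Toy* is made up for this example, and only Layer, Blob, caffe_copy, and DiagonalAffineMap are names taken from Caffe or the patched header above.

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

// Hypothetical layer, written only to illustrate overriding coord_map().
template <typename Dtype>
class ToyPassThroughLayer : public Layer<Dtype> {
 public:
  explicit ToyPassThroughLayer(const LayerParameter& param)
      : Layer<Dtype>(param), stride_(1), pad_(0) {}

  virtual inline const char* type() const { return "ToyPassThrough"; }

  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    // The toy layer keeps the input shape unchanged.
    top[0]->ReshapeLike(*bottom[0]);
  }

  // Report, per spatial axis, how a top coordinate maps to a bottom
  // coordinate: bottom = stride * top - pad. With stride_ = 1 and pad_ = 0
  // this is the identity; a real resampling layer would report its actual
  // stride and padding here so cropping logic can align blobs.
  virtual DiagonalAffineMap<Dtype> coord_map() {
    return DiagonalAffineMap<Dtype>(vector<pair<Dtype, Dtype> >(
        2, make_pair(Dtype(stride_), Dtype(-pad_))));
  }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    // Identity forward pass, just so the class is complete.
    caffe_copy(bottom[0]->count(), bottom[0]->cpu_data(),
               top[0]->mutable_cpu_data());
  }

  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down,
      const vector<Blob<Dtype>*>& bottom) {
    // Identity backward pass: route the gradient straight through.
    if (propagate_down[0]) {
      caffe_copy(top[0]->count(), top[0]->cpu_diff(),
                 bottom[0]->mutable_cpu_diff());
    }
  }

  int stride_, pad_;
};

}  // namespace caffe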