Render fonts with SDL2, OpenGL ES 2.0 (GLSL 1.0) & FreeType

This is my first post here. I am developing an application intended to run on a Raspberry Pi 3 board. My goal is to draw the graphics with the GPU instead of the CPU, so that the CPU stays available for the processing the rest of the application needs: mathematical calculations, I/O comms, I2C comms, virtual serial port comms, etc.

So far I have been able to draw lines, among other things, with glDrawArrays() using SDL2 and OpenGL ES 2.0, following the tutorial e-book from https://keasigmadelta.com/store/gles3sdl2-tutorial/. I tried it successfully on a Raspberry Pi 3 with the GL driver enabled, which supports OpenGL ES 2.0 and GLSL 1.0.

I am trying to render fonts with SDL2, which provides very nice functions that let you define the color, size, and other parameters of the resulting render. Although rendering fonts with the SDL2 function TTF_RenderText_Blended(...) gave me perfect results, with reasonable timing and CPU overhead, on my Intel Core 2 Quad Q9550 @ 2.8 GHz, I cannot say the same for the Raspberry Pi 3. Using the GPU with glDrawArrays() on my RPi3 gave impressive results: roughly 5-10% CPU load while drawing 1000 lines between randomly chosen vertices more than 50 times per second. But I need to render fonts with the GPU instead of the CPU, because plain SDL2 font rendering causes 50-60% CPU load on my RPi3, leaving me no headroom for the other mathematical calculations, etc.

After searching the internet for two or three days, I decided to follow the tutorial at https://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_01, which does not, however, cover SDL2. I was able to compile the code without errors using: g++ -Wall main.cpp -I/usr/include/freetype2 -lSDL2 -lSDL2_ttf -lGL -lGLEW -lfreetype -o main. You can disregard some of the options, as they refer to other libraries in use, such as SDL2_ttf.

Please keep in mind that the code already works fine with glDrawArrays(), although that part is not shown here.

Some people will find some of my comments ridiculous, I know.

What I get from the code below is a screen filled with the color set by glClearColor(0.5, 0.5, 0.5, 1);, which is gray, and that is exactly what I get; nothing else happens. For debugging, I placed an SDL_Log() call in render_text(). As you can see, display() contains two calls to render_text(), so the two SDL_Log() invocations produce the following output:
INFO: Debug info: glyph w: 0, glyph rows: 0
INFO: Debug info: glyph w: 25, glyph rows: 36
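
Note that the zeros in the first line are expected: the SDL_Log() call sits at the top of render_text(), before the loop has loaded any glyph, so it prints whatever the (still empty) glyph slot contains; the second call prints the metrics of the last glyph loaded by the first call. Logging inside the loop, after FT_Load_Char() succeeds, would give per-glyph metrics instead, e.g.:

  if (FT_Load_Char(face, *p, FT_LOAD_RENDER))
      continue;
  // The glyph slot is populated now, so its metrics are meaningful
  SDL_Log("Glyph '%c' w: %u, rows: %u", *p, g->bitmap.width, g->bitmap.rows);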

That is all the information I have. Can you help me get the font rendering to work?

One thing is certain: I have to sit down and learn some OpenGL ES and GLSL properly.

Vertex shader contents:

#version 100

attribute vec4 coord;
varying vec2 texcoord;

void main(void) {
  gl_Position = vec4(coord.xy, 0, 1);
  texcoord = coord.zw;
}

Fragment shader contents:

#version 100

#ifdef GL_ES
  precision highp float;
#endif

varying vec2 texcoord;
uniform sampler2D tex;
uniform vec4 color;

void main(void) {
  gl_FragColor = vec4(1, 1, 1, texture2D(tex, texcoord).r) * color;
}
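
A note for the Raspberry Pi target: single-channel GL_RED textures only exist in OpenGL ES starting with version 3.0; core OpenGL ES 2.0 offers GL_ALPHA and GL_LUMINANCE instead. A sketch of an ES 2.0-friendly variant, assuming the glyph bitmap is uploaded with GL_ALPHA as both internal format and format, would sample the alpha channel:

  gl_FragColor = vec4(1.0, 1.0, 1.0, texture2D(tex, texcoord).a) * color;

with the matching upload call:

  glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, g->bitmap.width, g->bitmap.rows,
               0, GL_ALPHA, GL_UNSIGNED_BYTE, g->bitmap.buffer);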

Compiled with:

g++ -Wall main.cpp -I/usr/include/freetype2 -lSDL2 -lGL -lGLEW -lfreetype -o main

The source code, tested on Linux Mint 18.3 (Ubuntu 16.04 base), is as follows:

// Standard libs
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

// Add -lSDL2
// Found at /usr/local/include/SDL2
#include <SDL2/SDL.h>

// Add -lGL and -lGLEW to compiler
// Found at /usr/include/GL
#include <GL/glew.h>
#include <GL/glu.h>

// Add -lfreetype and -I/usr/include/freetype2 to compiler
// Found at /usr/include/freetype2/freetype/config/
#include <ft2build.h>
#include FT_FREETYPE_H

SDL_Window *window=NULL;
SDL_GLContext openGLContext;
FT_Library ft=NULL;
FT_Face face;
GLuint shaderProg;

typedef struct {
  float position[2];
} Vertex;

// render_text() takes 5 arguments: the string to render, the x and y start coordinates,
// and the x and y scale parameters. The last two should be chosen so that one glyph
// pixel corresponds to one screen pixel.
void render_text(const char *text, float x, float y, float sx, float sy) {
  const char *p;

  FT_GlyphSlot g = face->glyph;

  SDL_Log("Debug info: glyph w: %d, glyph rows: %d", g->bitmap.width, g->bitmap.rows);

  for(p = text; *p; p++) {

    // If FT_Load_Char() returns a non-zero value then the glyph in *p could not be loaded
    if(FT_Load_Char(face, *p, FT_LOAD_RENDER))
        continue;

    glTexImage2D(
      GL_TEXTURE_2D,
      0,
      GL_RED,
      g->bitmap.width,
      g->bitmap.rows,
      0,
      GL_RED,
      GL_UNSIGNED_BYTE,
      g->bitmap.buffer
    );

    float x2 = x + g->bitmap_left * sx;
    float y2 = -y - g->bitmap_top * sy;
    float w = g->bitmap.width * sx;
    float h = g->bitmap.rows * sy;

    GLfloat box[4][4] = {
        {x2,     -y2    , 0, 0},
        {x2 + w, -y2    , 1, 0},
        {x2,     -y2 - h, 0, 1},
        {x2 + w, -y2 - h, 1, 1},
    };

    glBufferData(GL_ARRAY_BUFFER, sizeof box, box, GL_DYNAMIC_DRAW);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    x += (g->advance.x/64) * sx;
    y += (g->advance.y/64) * sy;
  }
}


void display(void) {

  // I had to add the next three lines of code because the 1st param to glUniform4fv() was unreferenced in the Wiki tutorial.
  // After looking at: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glGetUniformLocation.xhtml
  // I concluded that I had to use glGetUniformLocation() to get the location, but I had no idea what to pass as its 2nd param.
  // The docs say about the 2nd param: it points to a null-terminated string containing the name of the uniform variable to query.
  // There is a sampler2D variable in the fragment shader, so I picked a name from there. I know I sound ridiculous...
  int w=0, h=0;
  SDL_GetWindowSize(window, &w, &h);
  GLint uLocation = glGetUniformLocation(shaderProg, "sample2D");

  // Clear the invisible buffer with the color specified
  glClearColor(0.5, 0.5, 0.5, 1);
  glClear(GL_COLOR_BUFFER_BIT);

  GLfloat black[4] = {0, 0, 0, 1};
  glUniform4fv(uLocation, 1, black);

  float sx = 2.0 / (float)w;
  float sy = 2.0 / (float)h;

  render_text("The Quick Brown Fox Jumps Over The Lazy Dog", -1 + 8 * sx,   1 - 50 * sy, sx, sy);
  render_text("The Misaligned Fox Jumps Over The Lazy Dog", -1 + 8.5 * sx, 1 - 100.5 * sy, sx, sy);

  // I replaced glutSwapBuffers(); with the following
  SDL_GL_SwapWindow(window);
}

void shaderProgDestroy(GLuint shaderProg) {
  glDeleteProgram(shaderProg);
}

/** Destroys a shader.
*/
static void shaderDestroy(GLuint shaderID) {
  glDeleteShader(shaderID);
}

/** Gets the file's length.
*
* @param file the file
*
* @return size_t the file's length in bytes
*/
static size_t fileGetLength(FILE *file) {
  size_t length;
  size_t currPos = ftell(file);
  fseek(file, 0, SEEK_END);
  length = ftell(file);
  // Return the file to its previous position
  fseek(file, currPos, SEEK_SET);
  return length;
}


/** Loads and compiles a shader from a file.
*
* This will print any errors to the console.
*
* @param filename the shader's filename
* @param shaderType the shader type (e.g., GL_VERTEX_SHADER)
*
* @return GLuint the shader's ID, or 0 if failed
*/
static GLuint shaderLoad(const char *filename, GLenum shaderType) {
  FILE *file = fopen(filename, "r");
  if (!file) {
    SDL_Log("Can't open file: %s\n", filename);
    return 0;
  }
  size_t length = fileGetLength(file);
  // Alloc space for the file (plus '\0' termination)
  GLchar *shaderSrc = (GLchar*)calloc(length + 1, 1);
  if (!shaderSrc) {
    SDL_Log("Out of memory when reading file: %s\n", filename);
    fclose(file);
    file = NULL;
    return 0;
  }
  fread(shaderSrc, 1, length, file);
  // Done with the file
  fclose(file);
  file = NULL;

  // Create the shader
  GLuint shader = glCreateShader(shaderType);

  glShaderSource(shader, 1, (const GLchar**)&shaderSrc, NULL);
  free(shaderSrc);
  shaderSrc = NULL;
  // Compile it
  glCompileShader(shader);
  GLint compileSucceeded = GL_FALSE;
  glGetShaderiv(shader, GL_COMPILE_STATUS, &compileSucceeded);
  if (!compileSucceeded) {
    // Compilation failed. Print error info
    SDL_Log("Compilation of shader %s failed:\n", filename);
    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
    GLchar *errLog = (GLchar*)malloc(logLength);
    if (errLog) {
      glGetShaderInfoLog(shader, logLength, &logLength, errLog);
      SDL_Log("%s\n", errLog);
      free(errLog);
    }
    else {
      SDL_Log("Couldn't get shader log; out of memory\n");
    }
    glDeleteShader(shader);
    shader = 0;
  }
  return shader;
}

GLuint shaderProgLoad(const char *vertFilename, const char *fragFilename) {

  // Load vertex shader file from disk
  GLuint vertShader = shaderLoad(vertFilename, GL_VERTEX_SHADER);
  if (!vertShader) {
    SDL_Log("Couldn't load vertex shader: %s\n", vertFilename);
    return 0;
  }

  // Load fragment shader file from disk
  GLuint fragShader = shaderLoad(fragFilename, GL_FRAGMENT_SHADER);
  if (!fragShader) {
    SDL_Log("Couldn't load fragment shader: %s\n", fragFilename);
    shaderDestroy(vertShader);
    vertShader = 0;
    return 0;
  }

  // Create a shader program out of the two (or more) shaders loaded
  GLuint shaderProg = glCreateProgram();
  if (shaderProg) {
    // Attach the two shaders to the program
    glAttachShader(shaderProg, vertShader);
    glAttachShader(shaderProg, fragShader);
    // Link the two shaders together
    glLinkProgram(shaderProg);

    GLint linkingSucceeded = GL_FALSE;

    // Get a status (true or false) of the linking process
    glGetProgramiv(shaderProg, GL_LINK_STATUS, &linkingSucceeded);

    // Handle the error if linking the two shaders went wrong
    if (!linkingSucceeded) {
      SDL_Log("Linking shader failed (vert. shader: %s, frag. shader: %s\n", vertFilename, fragFilename);
      GLint logLength = 0;
      glGetProgramiv(shaderProg, GL_INFO_LOG_LENGTH, &logLength);
      GLchar *errLog = (GLchar*)malloc(logLength);
      if (errLog) {
        glGetProgramInfoLog(shaderProg, logLength, &logLength, errLog);
        SDL_Log("%s\n", errLog);
        free(errLog);
      }
      else {
        SDL_Log("Couldn't get shader link log; out of memory\n");
      }
      glDeleteProgram(shaderProg);
      shaderProg = 0;
    }
  }
  else {
    SDL_Log("Couldn't create shader program\n");
  }

  // Free resources
  shaderDestroy(vertShader);
  shaderDestroy(fragShader);

  // Return the resulting shader program
  return shaderProg;
}

/** Creates the Vertex Buffer Object (VBO) containing
* the given vertices.
*
* @param vertices pointer to the array of vertices
* @param numVertices the number of vertices in the array
*/
GLuint vboCreate(Vertex *vertices, GLuint numVertices) {
  // Create the Vertex Buffer Object
  GLuint vbo;
  int nBuffers = 1;
  // Create a buffer
  glGenBuffers(nBuffers, &vbo);
  // Make the buffer a VBO buffer
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  // Copy the vertices data in the buffer, and deactivate with glBindBuffer(GL_ARRAY_BUFFER, 0);
  glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * numVertices, vertices, GL_STATIC_DRAW);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
  // Check for problems
  GLenum err = glGetError();
  if (err != GL_NO_ERROR) {
    // Failed
    glDeleteBuffers(nBuffers, &vbo);
    SDL_Log("Creating VBO failed, code %u\n", err);
    vbo = 0;
  }
  return vbo;
}


/** Frees the VBO.
*
* @param vbo the VBO's name.
*/
void vboFree(GLuint vbo) {
  glDeleteBuffers(1, &vbo);
}

void freeResources(void) {
  // Destroy GL objects while the context is still current,
  // then delete the context, destroy the window, and quit SDL
  shaderProgDestroy(shaderProg);
  SDL_GL_DeleteContext(openGLContext);
  SDL_DestroyWindow(window);
  SDL_Quit();
}


int main(int argc, char* args[]){
    // SDL2 video init
    SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER );

    // Setting openGL attributes
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    // Enable double buffering
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    // Enable hardware accelaration if available
    SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
    glewExperimental = GL_TRUE;

    // Get window
    window = SDL_CreateWindow("Test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                              800, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    // Get openGL context
    openGLContext = SDL_GL_CreateContext(window);
    // Init glew
    glewInit();

    // ft globally defined with FT_Library ft
    FT_Init_FreeType(&ft);

    // face globally defined with FT_Face face;
    FT_New_Face(ft, "LiberationMono-Bold.ttf", 0, &face);


    // All of the init functions called since the start of main() return normal values; no errors are reported.
    // I have skipped the if checks for the sake of simplicity.


    // Load vertex & fragment shaders, compile them, link them together, make a program and return it
    shaderProg = shaderProgLoad("shaderV1.vert", "shaderV1.frag");
    // Activate the program
    glUseProgram(shaderProg);

    // The code up to this point works fine


    // This is where the code from wikipedia starts
    FT_Set_Pixel_Sizes(face, 0, 48);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
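
    // Note: the Wikibooks tutorial also generates and binds a texture
    // object at this point, and points the "tex" sampler uniform at
    // texture unit 0, so that the glTexParameteri() and glTexImage2D()
    // calls have a specific texture object to operate on. A minimal
    // sketch following that tutorial:
    //   GLuint tex;
    //   glActiveTexture(GL_TEXTURE0);
    //   glGenTextures(1, &tex);
    //   glBindTexture(GL_TEXTURE_2D, tex);
    //   glUniform1i(glGetUniformLocation(shaderProg, "tex"), 0);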

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    GLuint vbo;

    // I set the var attribute_coord myself (see the note below); is this right?
    // The code from the Wiki did not have any initialization for this variable.
    GLuint attribute_coord=0;

    glGenBuffers(1, &vbo);
    glEnableVertexAttribArray(attribute_coord);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(attribute_coord, 4, GL_FLOAT, GL_FALSE, 0, 0);
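
    // Regarding the attribute_coord question above: instead of assuming
    // location 0, the location can be queried from the linked program,
    // since the vertex shader declares "attribute vec4 coord;":
    //   GLint attribute_coord = glGetAttribLocation(shaderProg, "coord");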

    display();
    // This is where the code from wikipedia ends

    while (1){

      // Wait
      SDL_Delay(10);
    }
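
    // Note: a loop that never polls SDL events will usually get the
    // window flagged as unresponsive by the desktop. A minimal sketch
    // of an event-driven replacement for the delay-only loop above:
    //   SDL_Event ev;
    //   bool running = true;
    //   while (running) {
    //     while (SDL_PollEvent(&ev))
    //       if (ev.type == SDL_QUIT) running = false;
    //     SDL_Delay(10);
    //   }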


    // A function that free resources
    freeResources();

    return 0;
}

The problem is simple: you forgot to set the uniform variables tex and color (tex is not strictly necessary in your case, because it defaults to 0).

After the program has been linked (glLinkProgram), determine the uniform locations of the active program resources tex and color with glGetUniformLocation. Then, once the shader program has been made current (glUseProgram), set the uniforms with glUniform1i and glUniform4fv respectively:

shaderProg = shaderProgLoad("shaderV1.vert", "shaderV1.frag");

GLint tex_loc   = glGetUniformLocation( shaderProg, "tex" );
GLint color_loc = glGetUniformLocation( shaderProg, "color" );

// Activate the program
glUseProgram(shaderProg);

glUniform1i( tex_loc, 0 ); // 0, because the texture is bound to texture unit 0
float col[4] = { 1.0f, 0.0f, 0.0f, 1.0f }; // red and opaque
glUniform4fv( color_loc, 1, col);
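
A quick sanity check is also worth adding: glGetUniformLocation returns -1 for any name that is not an active uniform (such as "sample2D" in the question's display()), and glUniform* calls with location -1 are silently ignored, which matches the gray-screen symptom:

  if ( tex_loc == -1 || color_loc == -1 )
      SDL_Log( "Uniform lookup failed: tex=%d, color=%d", tex_loc, color_loc );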